Add Batch 6430854c-41e2-4dfe-8750-b7b0f5419fce
This view is limited to 50 files because it contains too many changes. See raw diff.
- 3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/f32a19e7-8ead-4470-8dcd-ee0e042554b2_content_list.json +3 -0
- 3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/f32a19e7-8ead-4470-8dcd-ee0e042554b2_model.json +3 -0
- 3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/f32a19e7-8ead-4470-8dcd-ee0e042554b2_origin.pdf +3 -0
- 3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/full.md +330 -0
- 3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/images.zip +3 -0
- 3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/layout.json +3 -0
- 3dgsdragdragginggaussiansforintuitivepointbased3dediting/3772ab48-3125-4695-a4cf-9b4e8a378d76_content_list.json +3 -0
- 3dgsdragdragginggaussiansforintuitivepointbased3dediting/3772ab48-3125-4695-a4cf-9b4e8a378d76_model.json +3 -0
- 3dgsdragdragginggaussiansforintuitivepointbased3dediting/3772ab48-3125-4695-a4cf-9b4e8a378d76_origin.pdf +3 -0
- 3dgsdragdragginggaussiansforintuitivepointbased3dediting/full.md +484 -0
- 3dgsdragdragginggaussiansforintuitivepointbased3dediting/images.zip +3 -0
- 3dgsdragdragginggaussiansforintuitivepointbased3dediting/layout.json +3 -0
- 3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/5b6c955e-a799-4e63-99df-a2223da2aaec_content_list.json +3 -0
- 3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/5b6c955e-a799-4e63-99df-a2223da2aaec_model.json +3 -0
- 3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/5b6c955e-a799-4e63-99df-a2223da2aaec_origin.pdf +3 -0
- 3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/full.md +407 -0
- 3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/images.zip +3 -0
- 3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/layout.json +3 -0
- 3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/5336a4e7-b106-4ad1-a8ae-4707462c1dc6_content_list.json +3 -0
- 3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/5336a4e7-b106-4ad1-a8ae-4707462c1dc6_model.json +3 -0
- 3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/5336a4e7-b106-4ad1-a8ae-4707462c1dc6_origin.pdf +3 -0
- 3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/full.md +439 -0
- 3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/images.zip +3 -0
- 3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/layout.json +3 -0
- 3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/7fe24d6b-0196-4bae-b5c6-92593a9ce526_content_list.json +3 -0
- 3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/7fe24d6b-0196-4bae-b5c6-92593a9ce526_model.json +3 -0
- 3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/7fe24d6b-0196-4bae-b5c6-92593a9ce526_origin.pdf +3 -0
- 3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/full.md +0 -0
- 3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/images.zip +3 -0
- 3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/layout.json +3 -0
- 3dpropertiesidentifyingchallengesindpoandchartingapathforward/8fedd307-9532-4e32-ad92-bd929756427e_content_list.json +3 -0
- 3dpropertiesidentifyingchallengesindpoandchartingapathforward/8fedd307-9532-4e32-ad92-bd929756427e_model.json +3 -0
- 3dpropertiesidentifyingchallengesindpoandchartingapathforward/8fedd307-9532-4e32-ad92-bd929756427e_origin.pdf +3 -0
- 3dpropertiesidentifyingchallengesindpoandchartingapathforward/full.md +690 -0
- 3dpropertiesidentifyingchallengesindpoandchartingapathforward/images.zip +3 -0
- 3dpropertiesidentifyingchallengesindpoandchartingapathforward/layout.json +3 -0
- 3dspatialmultimodalmemory/dfa8acf4-63ec-4e98-bb58-719ef5f72492_content_list.json +3 -0
- 3dspatialmultimodalmemory/dfa8acf4-63ec-4e98-bb58-719ef5f72492_model.json +3 -0
- 3dspatialmultimodalmemory/dfa8acf4-63ec-4e98-bb58-719ef5f72492_origin.pdf +3 -0
- 3dspatialmultimodalmemory/full.md +304 -0
- 3dspatialmultimodalmemory/images.zip +3 -0
- 3dspatialmultimodalmemory/layout.json +3 -0
- 3dstreetunveilerwithsemanticaware2dgsasimplebaseline/3f23c088-7d80-4cce-b138-008f4d7a0b93_content_list.json +3 -0
- 3dstreetunveilerwithsemanticaware2dgsasimplebaseline/3f23c088-7d80-4cce-b138-008f4d7a0b93_model.json +3 -0
- 3dstreetunveilerwithsemanticaware2dgsasimplebaseline/3f23c088-7d80-4cce-b138-008f4d7a0b93_origin.pdf +3 -0
- 3dstreetunveilerwithsemanticaware2dgsasimplebaseline/full.md +562 -0
- 3dstreetunveilerwithsemanticaware2dgsasimplebaseline/images.zip +3 -0
- 3dstreetunveilerwithsemanticaware2dgsasimplebaseline/layout.json +3 -0
- 3dtrajmastermastering3dtrajectoryformultientitymotioninvideogeneration/676ebc6a-2e43-43ba-bc5e-c1ea226e8932_content_list.json +3 -0
- 3dtrajmastermastering3dtrajectoryformultientitymotioninvideogeneration/676ebc6a-2e43-43ba-bc5e-c1ea226e8932_model.json +3 -0
3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/f32a19e7-8ead-4470-8dcd-ee0e042554b2_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d26c8e36f2765941796c2882cdb9e03f572b0ace7051495607f0d36af76981bb
+size 90224

3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/f32a19e7-8ead-4470-8dcd-ee0e042554b2_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72d458bd3444bd3a51d28a3eee5c20b9f5ce53bc0ff5fafcb5e2173e18d1e91a
+size 108998

3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/f32a19e7-8ead-4470-8dcd-ee0e042554b2_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67e837a46b99e2a0a59e059cc6e8baef289bb957d7f5afec99e25851d261f69e
+size 2190517
3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/full.md
ADDED
@@ -0,0 +1,330 @@
# 3D-AFFORDANCELLM: HARNESSING LARGE LANGUAGE MODELS FOR OPEN-VOCABULARY AFFORDANCE DETECTION IN 3D WORLDS

Hengshuo Chu $^{1}$ , Xiang Deng $^{†,1}$ , Qi Lv $^{1}$ , Xiaoyang Chen $^{1}$ , Yinchuan Li $^{2}$ , Jianye Hao $^{2}$ , Liqiang Nie $^{†,1}$ $^{1}$ Harbin Institute of Technology (Shenzhen), $^{2}$ Huawei Noah's Ark Lab

# ABSTRACT

3D affordance detection is a challenging problem with broad applications to various robotic tasks. Existing methods typically formulate detection as a label-based semantic segmentation task. This paradigm relies on predefined labels and lacks the ability to comprehend complex natural language, resulting in limited generalization in open-world scenes. To address these limitations, we reformulate the traditional affordance detection paradigm as an Instruction Reasoning Affordance Segmentation (IRAS) task. This task is designed to output an affordance mask region given a query reasoning text, which avoids fixed categories of input labels. We accordingly propose 3D-AffordanceLLM (3D-ADLLM), a framework designed for reasoning affordance detection in 3D open scenes. Specifically, 3D-ADLLM introduces large language models (LLMs) to 3D affordance perception with a custom-designed decoder for generating affordance masks, thus achieving open-world reasoning affordance detection. In addition, given the scarcity of 3D affordance datasets for training large models, we seek to extract knowledge from general segmentation data and transfer it to affordance detection. We therefore propose a multi-stage training strategy that begins with a novel pre-training task, i.e., Referring Object Part Segmentation (ROPS). This stage is designed to equip the model with general recognition and segmentation capabilities at the object-part level. After subsequent fine-tuning on the IRAS task, 3D-ADLLM obtains the reasoning ability required for affordance detection. In summary, 3D-ADLLM leverages the rich world knowledge and human-object interaction reasoning ability of LLMs, achieving approximately an $8\%$ improvement in mIoU on open-vocabulary affordance detection tasks.

# 1 INTRODUCTION

Robots are increasingly integrating into various aspects of our daily life (Matheson et al., 2019). As we progress toward developing the next generation of more advanced robotic agents, it is essential to enable robots to comprehend natural language instructions within context and to perceive task-relevant information in their surroundings. This skill is particularly vital for seamless interactions in unstructured environments, such as homes, where adaptability to diverse situations is crucial. Specifically, robots need not only to identify the objects in the environment but also to locate the specific regions of each object that are suitable for interaction: affordances.

The concept of affordance was introduced by ecological psychologist James Gibson (Gibson, 1966) and has since played a significant role in various robotic applications, including object recognition (Hong et al., 2023a; Hou et al., 2021), action anticipation (Roy & Fernando, 2021), agent activity recognition (Chen et al., 2023), and object functionality understanding (Li et al., 2023). Affordance describes potential interactions between robots and their environment, such as using a knife's blade for cutting tasks. Detecting affordances is challenging due to object diversity and complexity (Min et al., 2016). Traditionally, 2D images and CNNs are used (Nguyen et al., 2016; Do et al., 2018; Pacheco-Ortega & Mayol-Cuervas, 2022; Krizhevsky et al., 2012), but 2D information lacks the depth necessary for precise manipulation, necessitating 3D transformations (Deng et al., 2021).

With advanced depth cameras, 3D point clouds have become a widely used modality in robotic applications (Liu et al., 2019). Unlike conventional images, 3D point clouds offer robots direct and detailed 3D information about surrounding objects and environments. Hence, 3D affordance detection has been deemed a critical step in bridging perception and manipulation in the physical world for an embodied agent, and has shown substantial impact on practical applications such as robotic manipulation (Geng et al., 2023; Moldovan et al., 2012). However, current approaches are limited by fixed label sets (Deng et al., 2021; Mo et al., 2022), which reduces flexibility and generalization in dynamic settings. To overcome the fixed-label-set problem in affordance detection, Nguyen et al. (2023) incorporate a text encoder to enable models to handle a certain level of open-vocabulary detection, but these algorithms still rely on a classification-based training paradigm. As a result, they lack the ability for rapid and continuous learning when presented with new affordance label data. Furthermore, current affordance detection methods also heavily rely on predefined labels and lack the ability to understand and reason over long contextual text. Additionally, the scarcity of 3D affordance datasets (Deng et al., 2021; Nguyen et al., 2023) constrains the effective training of large-scale models.

Towards these issues, we redefine 3D affordance detection as an Instruction Reasoning Affordance Segmentation (IRAS) task and accordingly propose 3D-AffordanceLLM (3D-ADLLM). The IRAS task is designed to output an affordance mask region in response to complex, reasoning-based query text, overcoming the limitations of fixed affordance labels and the difficulty of understanding complex instructions. Our 3D-ADLLM framework introduces large language models (LLMs) to 3D affordance perception with a specifically designed decoder for generating affordance masks, thus achieving open-world reasoning affordance detection. Specifically, we introduce an additional token, $\langle \mathsf{AFF} \rangle$ , into the original LLM vocabulary. When the $\langle \mathsf{AFF} \rangle$ token is generated, its hidden embedding is further decoded into the corresponding segmentation mask. By representing the segmentation mask as an embedding, 3D-ADLLM not only gains segmentation capability but also benefits from end-to-end training. However, due to the scarcity of 3D affordance datasets for training large models, we propose a multi-stage training strategy to extract knowledge from general segmentation data and transfer it to affordance detection. This process involves pre-training on PartNet (Mo et al., 2019) with the Referring Object Part Segmentation (ROPS) task to acquire object-part-level general recognition and segmentation knowledge. Subsequently, we fine-tune the model on the IRAS task to achieve context-aware reasoning ability and robust performance in open-set zero-shot affordance detection.

Our main contributions are summarized as follows:

- Different from existing affordance detection methods that rely on fixed sets of labels, we address this limitation by introducing a new detection paradigm based on the Instruction Reasoning Affordance Segmentation (IRAS) task. By reformulating the label-based semantic segmentation task of the traditional affordance detection paradigm into a natural-language-driven reasoning affordance segmentation task, our model enables more flexible and context-aware reasoning, facilitating effective zero-shot learning capabilities.

- To address IRAS tasks driven by semantically complex natural language, we propose the 3D-AffordanceLLM (3D-ADLLM) model, combining a large language model (LLM) with a carefully designed Affordance Decoder. Our 3D-ADLLM framework can understand semantically rich, long-context instructions and leverages the LLM's world knowledge for superior open-vocabulary affordance detection.

- Due to the scarcity of 3D affordance datasets for training large models, we propose a multi-stage training strategy to transfer general segmentation knowledge into affordance detection. First, the model is equipped with general recognition and segmentation knowledge through a novel pre-training task, i.e., Referring Object Part Segmentation (ROPS). Subsequently, the model is fine-tuned on the IRAS task to handle context-aware reasoning and affordance region prediction.

# 2 RELATED WORK

Affordance Detection. Originating from the 2D domain, initial work in affordance detection primarily focused on identifying objects with affordances (Do et al., 2018). Building on this foundation, later studies (Lu et al., 2022) introduced linguistic descriptions to improve detection, but they continued to emphasize object-level affordances, lacking fine-grained analysis. Addressing this problem, subsequent research (Chen et al., 2023; Li et al., 2023; Luo et al., 2022; Nagarajan et al., 2019; Mi et al., 2020) has focused on detecting specific affordance parts, establishing a new benchmark for precision in the field. With the advancement of embodied AI, the scope of affordance learning has expanded into the 3D domain. 3D AffordanceNet (Deng et al., 2021) introduces the first benchmark dataset for learning affordance from object point clouds. IAGNet (Yang et al., 2023) proposes a setting for learning 3D affordance parts guided by image queries. Recently, some work (Nguyen et al., 2023) also explores open-vocabulary affordance detection in point clouds. However, these methods primarily focus on linking object geometric features with fixed affordance labels, overlooking the semantic aspect. This limitation makes it challenging to understand natural language instructions and hampers the ability to generalize affordance detection to unseen scenarios. In contrast, the proposed 3D-ADLLM overcomes the limitations of fixed label sets and enhances the ability to comprehend semantically complex descriptions. Specifically, we shift the detection paradigm from label-based semantic segmentation to Instruction Reasoning Affordance Segmentation (IRAS).

3D Large Multi-Modal Models. 3D object-level LMMs (Yu et al., 2022; Xue et al., 2023; Zhou et al., 2023) have successfully bridged the gap between 3D vision and text by leveraging large-scale 3D object datasets (Deitke et al., 2023; Vishwanath et al., 2009). ShapeLLM (Qi et al., 2024) further advances embodied interaction and referring expression grounding through its novel and powerful point encoder. However, despite these advances, such models still face challenges in interpreting complex spatial relationships within 3D scenes. For scene-level LMMs, models like Chat-3D (Wang et al., 2023) and LL3DA (Chen et al., 2024) enable interaction with scene objects using pre-selection mechanisms. Building on this foundation, Chat-3D v2 (Huang et al., 2023) enhances referencing and grounding accuracy by incorporating object identifiers, while 3D-LLM (Hong et al., 2023b) improves scene comprehension by integrating positional embeddings and location tokens. Unlike previous works that primarily focus on 3D grounding and understanding, our method introduces a specialized token, $\langle \mathrm{AFF} \rangle$ , which enables LLMs to directly detect affordances and generate affordance masks within 3D open-world scenes.

# 3 METHOD

# 3.1 PARADIGM REFORMULATION

Affordance detection aims to identify specific regions of objects that are suitable for interaction. It has been deemed a critical step in bridging perception and manipulation in the physical world for embodied agents. As illustrated in Fig. 1 (a), the traditional paradigm uses a shared point backbone (Qi et al., 2017; Zhao et al., 2021; Wang et al., 2019) to extract point-wise features and generates masks with a semantic segmentation head over predefined types. Alternatively, such methods leverage a text encoder like CLIP (Radford et al., 2021) to associate point-wise features with text embeddings of affordance labels via cosine similarity, achieving limited open-vocabulary detection at the phrase level. This paradigm relies on predefined labels and has a limited ability to understand complex natural language, which restricts its generalization in 3D open-world scenes.

To address these limitations, we introduce a new paradigm formulated as an Instruction Reasoning Affordance Segmentation (IRAS) task, as depicted in Fig. 1 (b). This paradigm is designed to establish a robust connection between language context and object affordance, avoiding over-reliance on auxiliary affordance label prediction. This approach facilitates a significant improvement in our ability to understand and interact with the physical world.

IRAS Definition. Given a query reasoning instruction $Q_{a}$ and an object point cloud $P_{c} \in \mathbb{R}^{N \times 3}$ with $N$ points, the goal of IRAS is to predict a binary mask $M_{a} \in \mathbb{R}^{N}$ that delineates the functional regions pertinent to the query, i.e., the affordance regions:

$$
F_{\mathrm{Model}}(Q_{a}, P_{c}) \Rightarrow M_{a}
$$
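
To make the interface concrete, the following is a minimal sketch of what an IRAS model consumes and produces; the function name, the `model` callable, and the example query are illustrative assumptions rather than the paper's actual API.

```python
import numpy as np

def iras_segment(model, instruction: str, points: np.ndarray) -> np.ndarray:
    """Hypothetical IRAS interface: map a reasoning instruction and an object
    point cloud of shape (N, 3) to a binary affordance mask of shape (N,)."""
    assert points.ndim == 2 and points.shape[1] == 3
    probs = model(instruction, points)      # assumed per-point probabilities in [0, 1]
    return (probs > 0.5).astype(np.uint8)   # binary mask M_a

# Example usage with a reasoning-style query (mug_points: an (N, 3) array):
# mask = iras_segment(model, "Where should I hold this mug to drink from it?", mug_points)
```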

Figure 1: Comparison of affordance detection paradigms based on our IRAS task versus the traditional label-based segmentation task. (a) shows that the label-based paradigm can only detect a fixed set of affordance regions through predefined labels and a segmentation head; (b) demonstrates that the IRAS-based paradigm forges a link between semantically complex instructions and object affordance, enabling open-world reasoning affordance detection.

# 3.2 3D-AFFORDANCELLM

Unlike traditional methods that rely on fixed label sets and are limited to short-text detection, IRAS demands robust language comprehension and reasoning to associate the potential affordance in the input query with 3D object regions. Thus, we incorporate large language models (LLMs) into 3D affordance perception. LLMs, trained on trillions of tokens, excel at understanding and reasoning about instructions and possess extensive world knowledge. For instance, when asked where to interact with a mug in order to grasp it, an LLM suggests using the handle for a firm grip to avoid spilling. This demonstrates LLMs' world knowledge and their capability to understand human-object interactions. To harness this capability for 3D affordance perception, we introduce the 3D AffordanceLLM model, aiming to improve affordance detection in previously unseen contexts.

Our framework, 3D AffordanceLLM, as illustrated in Fig. 2, primarily consists of two main components: (1) a point cloud multimodal model, which is trained to accept point cloud and text inputs and generate a response, including a special token, $\langle \mathsf{AFF} \rangle$ ; and (2) an Affordance Decoder (AFD), which extracts hidden-layer features from these $\langle \mathsf{AFF} \rangle$ tokens and combines them with segmentation point features to generate affordance masks.

Figure 2: The pipeline of 3D-ADLLM. Given the input point cloud and query reasoning instruction, the point cloud multimodal model is trained with LoRA to predict the special token $\langle \mathsf{AFF} \rangle$ . Finally, the special token and the dense point features from $f_{\mathrm{PB}}$ are fed into our affordance decoder to generate the final affordance mask.

# 3.2.1 MODEL ARCHITECTURE

As shown in Fig. 2, our 3D AffordanceLLM consists of the following modules: a pre-trained point cloud encoder $f_{\mathrm{pe}}$ , a projector $f_{\mathrm{proj}}$ , a point backbone $f_{\mathrm{PB}}$ , an affordance decoder $f_{\mathrm{AFD}}$ , and a pre-trained large language model (LLM) backbone $f_{\mathrm{llm}}$ .

Point Encoder. The point cloud encoder $f_{\mathrm{pe}}$ takes a point cloud $\mathbf{P}_{\mathrm{cloud}} \in \mathbb{R}^{n \times d}$ as input, where $n$ is the number of points and $d$ is the feature dimension of each point. The output of the encoder is a sequence of point features $X = (x_{1}, x_{2}, \ldots, x_{m}) \in \mathbb{R}^{m \times c}$ , where $m$ is the number of point features and $c$ is the feature dimension. Similarly, the point backbone $f_{\mathrm{PB}}$ also processes the input point cloud $\mathbf{P}_{\mathrm{cloud}} \in \mathbb{R}^{n \times d}$ , extracting dense point cloud features $X' = (x_{1}', x_{2}', \ldots, x_{n}') \in \mathbb{R}^{n \times c'}$ specifically tailored for segmentation tasks. These features are subsequently fed into the Affordance Decoder.

LLM Projector. The projector $f_{\mathrm{proj}}$ is an MLP layer that maps the point features $X$ to point tokens $Y = (y_{1}, y_{2}, \ldots, y_{m}) \in \mathbb{R}^{m \times c''}$ , where $c''$ is the dimension of the point tokens, matching the dimension of the text tokens.

Large Language Model. The LLM backbone $f_{\mathrm{llm}}$ is a decoder-only Transformer model (Vaswani et al., 2017), which processes a sequence of tokens comprising text and point tokens. This mixed token sequence is denoted as $Z = (z_{1}, z_{2}, \ldots, z_{k}) \in \mathbb{R}^{k \times c''}$ , where $k$ is the total number of tokens. Leveraging a self-attention mechanism, the LLM backbone captures contextual relationships between different token types, enabling it to generate responses based on both text and point cloud inputs. Formally, the output of the LLM backbone $f_{\mathrm{llm}}$ is a sequence of predicted tokens $\hat{Z} = (\hat{z}_{1}, \hat{z}_{2}, \ldots, \hat{z}_{k}) \in \mathbb{R}^{k \times c''}$ . The prediction of the $i$ -th token, $\hat{z}_{i}$ , is conditioned on all previous tokens, $Z_{<i} = (z_{1}, \ldots, z_{i-1})$ , which can be expressed as:

$$
\hat{z}_{i} = f_{\mathrm{llm}}(Z_{<i}).
$$

Each $\hat{z}_i$ is passed through a final linear layer followed by a softmax operation, which maps the hidden states to a probability distribution over the vocabulary. This layer is denoted as $f_{\mathrm{vocab}}: \mathbb{R}^{c''} \to \mathbb{R}^{V}$ , where $V$ is the size of the vocabulary. The final prediction $\tilde{z}_i$ for the $i$ -th token is the word in the vocabulary with the highest probability, expressed as:

$$
\tilde{z}_{i} = \arg\max_{w \in \mathrm{vocab}} f_{\mathrm{vocab}}(\hat{z}_{i})[w].
$$

Affordance Decoder. Building on the success of learnable query-based methods in object segmentation, we introduce an Affordance Decoder module (AFD) that leverages a set of learnable output queries conditioned on the input question, termed affordance queries $T_{\mathrm{a}}$ , to decode segmentation masks. A two-layer decoder updates both the point features and the question features via cross-attention. The updated query tokens and point features are then used to dynamically predict affordance masks.
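
As a rough illustration of this design, the PyTorch-style sketch below shows a two-layer query-based decoder in which learnable affordance queries cross-attend to dense point features and are then dotted against those features to produce per-point mask logits; the class name, dimensions, and layer layout are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class AffordanceDecoderSketch(nn.Module):
    """Illustrative query-based decoder: learnable affordance queries attend to
    dense point features, and the updated queries predict a per-point mask."""

    def __init__(self, dim: int = 256, num_queries: int = 1, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))  # affordance queries T_a
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(2)]
        )
        self.mask_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, point_feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, dim) dense features from the point backbone f_PB
        # cond:        (B, 1, dim) conditioning embedding (e.g. the <AFF> embedding)
        q = self.queries.unsqueeze(0).expand(point_feats.size(0), -1, -1) + cond
        for attn in self.layers:
            q = q + attn(q, point_feats, point_feats)[0]   # cross-attention update
        q = self.mask_mlp(q)                                # (B, num_queries, dim)
        # Dynamic mask prediction: dot product between queries and point features.
        logits = torch.einsum("bqd,bnd->bqn", q, point_feats)
        return logits.squeeze(1)                            # (B, N) affordance logits
```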

# 3.2.2 EMBEDDING AS AFFORDANCE

Unlike conventional tasks such as grounding and question answering within the realm of 3D large multi-modal models (LMMs), the IRAS task requires generating an affordance segmentation mask directly from a reasoning query. Most current 3D LLMs, such as 3D-LLM (Hong et al., 2023a) and ShapeLLM (Qi et al., 2024), support 3D scenes or objects and text as input, but they can only output text or bounding boxes and cannot directly output fine-grained segmentation masks. Inspired by the LISA model (Lai et al., 2024), which directly outputs the segmentation mask in the 2D domain, we adopt a similar idea for 3D affordance detection. To achieve this, we propose the embedding-as-affordance paradigm to inject new affordance segmentation capabilities into the 3D LMM. The pipeline of our method is illustrated in Fig. 2. Specifically, we expand the original LLM vocabulary by adding a new token, $\langle \mathrm{AFF} \rangle$ , which signals a request for an affordance output. Given a complex reasoning instruction query $\mathbf{Q}_{\mathrm{aff}}$ and a point cloud input $\mathbf{P}_{\mathrm{cloud}}$ , we feed them into the multimodal point cloud LLM $F_{\mathrm{3D-ADLLM}}$ , which outputs a text response $\hat{\mathbf{y}}_{\mathrm{txt}}$ : "Sure, it is $\langle \mathrm{AFF} \rangle$ ." This process can be formulated as:

$$
\hat{\mathbf{y}}_{\mathrm{txt}} = F_{\mathrm{3D-ADLLM}}(\mathbf{P}_{\mathrm{cloud}}, \mathbf{Q}_{\mathrm{aff}}).
$$

When the LLM intends to generate a binary affordance mask, the output $\hat{\mathbf{y}}_{\mathrm{txt}}$ includes an $\langle \mathrm{AFF} \rangle$ token. We then extract the LLM's last-layer embedding $\tilde{\mathbf{h}}_{\mathrm{aff}}$ corresponding to the $\langle \mathrm{AFF} \rangle$ token and apply an MLP projection layer $\operatorname{Proj}$ to obtain $\mathbf{h}_{\mathrm{aff}}$ . Simultaneously, the point cloud backbone $f_{\mathrm{PB}}$ extracts the dense point cloud features $\mathbf{f}$ from the point input $\mathbf{P}_{\mathrm{cloud}}$ . Finally, $\mathbf{h}_{\mathrm{aff}}$ and $\mathbf{f}$ are fed to the decoder $f_{\mathrm{AFD}}$ to produce the final affordance mask $\hat{\mathbf{M}}_{\mathrm{aff}}$ . The process can be formulated as

$$
\begin{aligned}
\mathbf{h}_{\mathrm{aff}} &= \operatorname{Proj}(\tilde{\mathbf{h}}_{\mathrm{aff}}), \\
\mathbf{f} &= f_{\mathrm{PB}}(\mathbf{P}_{\mathrm{cloud}}), \quad \hat{\mathbf{M}}_{\mathrm{aff}} = f_{\mathrm{AFD}}(\mathbf{h}_{\mathrm{aff}}, \mathbf{f}).
\end{aligned}
$$
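
The following is a minimal sketch, under assumed tensor shapes and placeholder module names, of how the $\langle \mathrm{AFF} \rangle$ hidden state could be located in the generated sequence and decoded into a mask.

```python
import torch

def decode_affordance_mask(llm_outputs, output_ids, aff_token_id,
                           proj, point_backbone, afd, points):
    """Hedged sketch of the embedding-as-affordance step. `proj`, `point_backbone`,
    and `afd` stand in for Proj, f_PB, and f_AFD; shapes are assumptions."""
    hidden = llm_outputs.hidden_states[-1]       # (B, T, C) last-layer hidden states
    aff_pos = (output_ids == aff_token_id)       # (B, T) positions of the <AFF> token
    h_aff = hidden[aff_pos]                      # (B, C), assuming one <AFF> per sample
    h_aff = proj(h_aff).unsqueeze(1)             # (B, 1, C') projected embedding
    f_dense = point_backbone(points)             # (B, N, C') dense point features
    mask_logits = afd(f_dense, h_aff)            # Affordance Decoder -> (B, N)
    return mask_logits.sigmoid() > 0.5           # binary affordance mask M_aff
```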

(a) Pre-training Stage: Extracting General Segmentation Knowledge

(b) Specific Task Finetuning: Transferring Knowledge to Affordance Detection

Figure 3: Multi-stage training strategy. Illustration of transferring general segmentation knowledge to affordance detection. (a) depicts the process of extracting general segmentation knowledge, while (b) illustrates the framework for transferring this knowledge to affordance detection.

# 3.3 MULTI-STAGE TRAINING

Existing 3D affordance datasets, such as 3D AffordanceNet and OpenAD (Deng et al., 2021; Nguyen et al., 2023), are constrained in availability and size. Thus, given the scarcity of 3D affordance data for training large models, we devise a multi-stage training strategy that extracts knowledge from general segmentation data and transfers it to IRAS affordance detection. In addition, due to the varying scales of target affordance regions, we propose a sample-unbalanced loss factor to enhance the model's learning effectiveness and adaptability across different region scales.

# 3.3.1 EXTRACTING GENERAL SEGMENTATION KNOWLEDGE

Considering the limited amount of affordance data available for training large models, this stage aims to leverage general datasets to equip the model with general recognition and segmentation capabilities at the object-part level. Thus, we introduce the Referring Object Part Segmentation (ROPS) task to acquire this general knowledge.

ROPS Definition. Given a referring expression $Q_{p}$ that denotes the name of an object component and an object point cloud $P_{c} \in \mathbb{R}^{N \times 3}$ consisting of $N$ points, the objective of ROPS is to predict a binary mask $M_{p} \in \mathbb{R}^{N}$ that corresponds to the query:

$$
F_{\mathrm{Model}}(Q_{p}, P_{c}) \Rightarrow M_{p}
$$

In the pre-training phase, we employ the framework in Fig. 3 (a) to train the ROPS task on the PartNet dataset (Mo et al., 2019). As depicted in Fig. 3 (a), the object point cloud is processed by a trainable backbone to extract point features $\mathbf{f}_{\mathrm{P_{cloud}}}$ . The object part descriptions are encoded using a frozen text encoder to generate text features $\mathbf{f}_{\mathrm{Q_{part}}}$ , which are then mapped via an offline MLP layer to produce $\mathbf{f}'_{\mathrm{Q_{part}}}$ . Finally, $\mathbf{f}'_{\mathrm{Q_{part}}}$ and $\mathbf{f}_{\mathrm{P_{cloud}}}$ are passed into the Mask Decoder to generate the final part mask $\mathbf{M}_{\mathrm{part}}$ , formulated as:

$$
\mathbf{M}_{\mathrm{part}} = \operatorname{MaskDecoder}\left(\mathbf{f}'_{\mathrm{Q_{part}}}, \mathbf{f}_{\mathrm{P_{cloud}}}\right)
$$

Training Objectives. Unlike (Mo et al., 2019), which uses a multi-class head for prediction, our strategy seeks to learn the relationship between a referring object part description and the corresponding mask region. Thus, we solely employ Dice loss and binary cross-entropy (BCE) loss to guide the segmentation mask prediction.

$$
\mathcal{L} = \lambda_{1} \mathcal{L}_{\mathrm{BCE}} + \lambda_{2} \mathcal{L}_{\mathrm{DICE}}.
$$
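
A minimal sketch of these two terms on per-point mask logits is shown below; the weighting values are illustrative placeholders, since the paper does not state them here.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss over per-point mask predictions; logits and target are (B, N)."""
    probs = logits.sigmoid()
    inter = (probs * target).sum(dim=-1)
    union = probs.sum(dim=-1) + target.sum(dim=-1)
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def rops_loss(logits: torch.Tensor, target: torch.Tensor,
              lambda_bce: float = 1.0, lambda_dice: float = 1.0) -> torch.Tensor:
    """Pre-training objective sketch: weighted BCE + Dice, matching the equation above."""
    target = target.float()
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return lambda_bce * bce + lambda_dice * dice_loss(logits, target)
```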

# 3.3.2 TRANSFERRING KNOWLEDGE TO AFFORDANCE DETECTION

Building upon the extensive segmentation knowledge acquired from the ROPS task, we transfer this knowledge to affordance detection through IRAS fine-tuning to enhance the model's generalization. We also propose a sample-unbalanced loss factor to address the learning of affordance regions at different scales. Specifically, during IRAS fine-tuning, we use the pre-trained checkpoints $W_{\mathrm{f_{PB}}}$ and $W_{\mathrm{f_{MD}}}$ to initialize the modules $f_{\mathrm{PB}}$ and $f_{\mathrm{AFD}}$ in our 3D-ADLLM framework, as shown in Fig. 2. We then use LoRA to fine-tune the pre-trained LLM for affordance segmentation.

Training Objectives. The model is trained end-to-end using a text generation loss $\mathcal{L}_{\mathrm{txt}}$ and a segmentation mask loss $\mathcal{L}_{\mathrm{mask}}$ . Specifically, $\mathcal{L}_{\mathrm{txt}}$ encourages the LLM to generate responses that include the $\langle \mathrm{AFF} \rangle$ token and forces the features to map to the same $\langle \mathrm{AFF} \rangle$ placeholder. Then, $\mathcal{L}_{\mathrm{mask}}$ drives the $\langle \mathrm{AFF} \rangle$ features to carry the information needed for affordance prediction and guides affordance mask generation. The overall objective $\mathcal{L}$ is the weighted sum of these losses, determined by $\lambda_{\mathrm{txt}}$ and $\lambda_{\mathrm{mask}}$ :

$$
\mathcal{L} = \lambda_{\mathrm{txt}} \mathcal{L}_{\mathrm{txt}} + \lambda_{\mathrm{mask}} \mathcal{L}_{\mathrm{mask}}.
$$

Specifically, $\mathcal{L}_{\mathrm{txt}}$ is the auto-regressive cross-entropy loss for text generation, and $\mathcal{L}_{\mathrm{mask}}$ is the mask loss for high-quality segmentation. To compute $\mathcal{L}_{\mathrm{mask}}$ , we use a combination of per-point BCE loss and Dice loss, with weights $\lambda_{\mathrm{bce}}$ and $\lambda_{\mathrm{dice}}$ . Given the ground-truth targets $\mathbf{y}_{\mathrm{txt}}$ and $\mathbf{M}$ obtained from the dataset, these losses are formulated as:

$$
\mathcal{L}_{\mathrm{txt}} = \mathbf{CE}(\hat{\mathbf{y}}_{\mathrm{txt}}, \mathbf{y}_{\mathrm{txt}}),
$$

$$
\mathcal{L}_{\mathrm{mask}} = \lambda_{\mathrm{bce}} \mathbf{BCE}(\hat{\mathbf{M}}, \mathbf{M}) + \lambda_{\mathrm{dice}} \mathbf{DICE}(\hat{\mathbf{M}}, \mathbf{M}).
$$
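
Putting the pieces together, a compact sketch of the end-to-end fine-tuning objective (before the per-class weighting introduced next) might look as follows; all weights are illustrative defaults, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def iras_finetune_loss(text_logits, text_target, mask_logits, mask_target,
                       lambda_txt=1.0, lambda_mask=1.0, lambda_bce=1.0, lambda_dice=1.0):
    """Sketch: auto-regressive CE on the response (which must emit <AFF>)
    plus BCE + Dice on the predicted per-point mask."""
    # text_logits: (B, T, V); text_target: (B, T) token ids
    l_txt = F.cross_entropy(text_logits.flatten(0, 1), text_target.flatten())
    # mask_logits: (B, N) logits; mask_target: (B, N) binary labels
    mask_target = mask_target.float()
    probs = mask_logits.sigmoid()
    inter = (probs * mask_target).sum(-1)
    union = probs.sum(-1) + mask_target.sum(-1)
    l_dice = (1.0 - (2.0 * inter + 1e-6) / (union + 1e-6)).mean()
    l_bce = F.binary_cross_entropy_with_logits(mask_logits, mask_target)
    l_mask = lambda_bce * l_bce + lambda_dice * l_dice
    return lambda_txt * l_txt + lambda_mask * l_mask
```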

Sample Unbalanced Loss Factor. The varying scales of target affordance regions challenge the model's adaptiveness across scales: this variability results in an imbalance in the difficulty of learning samples across affordance types during training. To mitigate this sample imbalance, we apply weights to the mask losses for each class. The weighted loss is defined as $\mathcal{L}_{\mathrm{mask}} = \sum_{i=1}^{n} \omega_i \mathcal{L}_{\mathrm{mask}}^i$ . The weight $\omega_i$ is calculated as:

$$
\omega_{i} = \left(\frac{\max\{c_{1}, c_{2}, \ldots, c_{m}, c_{0}\}}{c_{i}}\right)^{1/4}
$$

where $c_{i}$ is the number of ground-truth points for class $i$ , and $c_{0}$ denotes the number of background points.
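
A small sketch of this weighting, using invented point counts purely to illustrate the formula, makes the effect concrete: rare affordance regions receive larger weights.

```python
def class_weights(point_counts):
    """Sample-unbalanced loss factor sketch: w_i = (max(c) / c_i) ** (1 / 4),
    where point_counts includes the background class c_0."""
    c_max = max(point_counts.values())
    return {name: (c_max / c) ** 0.25 for name, c in point_counts.items()}

# Illustrative counts only (not dataset statistics):
weights = class_weights({"background": 80_000, "wear": 40_000, "pull": 5_000})
# -> {"background": 1.0, "wear": ~1.19, "pull": 2.0}
```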

# 4 EXPERIMENT

# 4.1 EXPERIMENT SETTING

Network Architecture. We use Phi-3.5-mini-instruct ($f_{\mathrm{llm}}$) (Abdin et al., 2024) as our base LLM. For the point encoder ($f_{\mathrm{pe}}$), we adopt Point-BERT (Yu et al., 2022), pre-trained with ULIP-2 (Xue et al., 2024) on the ModelNet dataset (Vishwanath et al., 2009). The projector layer ($f_{\mathrm{proj}}$) between the point encoder $f_{\mathrm{pe}}$ and the LLM $f_{\mathrm{llm}}$ is a linear layer. Additionally, we utilize the Point Transformer (Zhao et al., 2021) as the backbone of our point segmentation model ($f_{\mathrm{PB}}$).

Datasets. As mentioned in Sec. 3.3, our training data comprises two types of task data. (1) Referring Object Part Segmentation dataset: we build this dataset on PartNet (Mo et al., 2019), which contains 573,585 part instances across 25,571 3D models and 24 object categories. For pre-training, we split it into single-part segmentation instances. (2) Instruction Reasoning Affordance Segmentation dataset: we meticulously compile a question-point affordance dataset with 42,119 paired samples from the 3D AffordanceNet dataset (Deng et al., 2021), covering 23 object classes and 36 affordance types. Detailed data settings (full or partial view) and visualized data analysis can be found in Appendix Sect. A.3.

Baseline Models. We compare our method with the following recent methods for zero-shot learning on 3D point clouds: ZSLPC (Cheraghian et al., 2019), TZSLPC (Cheraghian et al., 2020), 3DGenZ (Michele et al., 2021), OpenAD (Nguyen et al., 2023), IAGNet (Yang et al., 2023), LASO (Li et al., 2024), and ShapeLLM (Qi et al., 2024). Detailed explanations of the baseline models can be found in Appendix Sect. A.1.

Table 1: Main results of 3D-ADLLM on zero-shot open-vocabulary detection, calculated over all classes. The best results among all compared methods are in bold. *ShapeLLM is tested without fine-tuning.

<table><tr><td rowspan="2">Method</td><td colspan="3">Full-view</td><td colspan="3">Partial-view</td></tr><tr><td>mIoUc</td><td>Accc</td><td>mAccc</td><td>mIoUc</td><td>Accc</td><td>mAccc</td></tr><tr><td>TZSLPC (Cheraghian et al., 2020)</td><td>3.86</td><td>-</td><td>10.37</td><td>4.14</td><td>-</td><td>8.49</td></tr><tr><td>3DGenZ (Michele et al., 2021)</td><td>6.46</td><td>-</td><td>18.33</td><td>6.03</td><td>-</td><td>15.86</td></tr><tr><td>ZSLPC (Cheraghian et al., 2019)</td><td>9.97</td><td>-</td><td>18.70</td><td>9.52</td><td>-</td><td>17.16</td></tr><tr><td>ShapeLLM* (Qi et al., 2024)</td><td>0.88</td><td>0.28</td><td>0.99</td><td>1.49</td><td>1.35</td><td>1.70</td></tr><tr><td>OpenAD-PointNet++ (Nguyen et al., 2023)</td><td>13.53</td><td>3.97</td><td>16.40</td><td>11.29</td><td>2.41</td><td>13.88</td></tr><tr><td>OpenAD-DGCNN (Nguyen et al., 2023)</td><td>11.15</td><td>3.84</td><td>13.86</td><td>8.04</td><td>1.58</td><td>9.85</td></tr><tr><td>IAGNet (Yang et al., 2023)</td><td>16.16</td><td>19.07</td><td>23.92</td><td>14.36</td><td>16.90</td><td>21.73</td></tr><tr><td>LASO (Li et al., 2024)</td><td>22.41</td><td>15.90</td><td>30.22</td><td>20.06</td><td>8.80</td><td>26.84</td></tr><tr><td>Ours-Qwen</td><td>24.43</td><td>23.90</td><td>35.45</td><td>26.25</td><td>29.5</td><td>41.57</td></tr><tr><td>Ours-Phi</td><td>30.43</td><td>29.36</td><td>47.78</td><td>27.25</td><td>27.87</td><td>39.04</td></tr></table>

Evaluation Metrics. We divide the IRAS dataset following the split of OpenAD and evaluate both the closed-set and open-set settings of IRAS. Following Nguyen et al. (2023), we use three metrics computed over all classes: $\mathrm{mIoU}^{\mathrm{c}}$ (mean IoU over all classes), $\mathrm{Acc}^{\mathrm{c}}$ (overall accuracy over all points), and $\mathrm{mAcc}^{\mathrm{c}}$ (mean accuracy over all classes). However, unlike OpenAD, which includes the "none" category in the calculation of these metrics, we only compute them over the 36 affordance types, excluding "none," as it has little comparative significance. For a comprehensive evaluation against existing methods, we additionally assess each instance across the entire dataset with the following metrics over all instances: $\mathrm{mIoU}^{\mathrm{i}}$ (mean IoU over all instance data), $\mathrm{mAcc}^{\mathrm{i}}$ (mean accuracy of points over all instance data), $\mathrm{mPrec}^{\mathrm{i}}$ (mean precision of points over all instance data), $\mathrm{mRec}^{\mathrm{i}}$ (mean recall of points over all instance data), and $\mathrm{mAP}_{50}^{\mathrm{i}}$ (mean average precision at $50\%$ intersection over union).
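
For orientation only, a simplified per-point sketch of the three class-level metrics is given below; the actual benchmark evaluation aggregates these statistics over the whole test split.

```python
import numpy as np

def class_level_metrics(preds, gts, affordance_ids):
    """Illustrative mIoU^c, Acc^c, and mAcc^c over per-point class predictions.
    preds, gts: integer class id per point; affordance_ids excludes "none"."""
    preds, gts = np.asarray(preds), np.asarray(gts)
    ious, accs = [], []
    for c in affordance_ids:
        inter = np.sum((preds == c) & (gts == c))
        union = np.sum((preds == c) | (gts == c))
        if union > 0:
            ious.append(inter / union)
        if np.sum(gts == c) > 0:
            accs.append(inter / np.sum(gts == c))
    return np.mean(ious), np.mean(preds == gts), np.mean(accs)
```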

# 4.2 EXPERIMENT RESULTS

# 4.2.1 COMPARISON RESULTS

3D-ADLLM vs. Other Models. Table 1 shows that our 3D-ADLLM achieves superior performance on both the full-view and partial-view tasks across all three evaluation metrics. Notably, 3D AffordanceLLM significantly outperforms the runner-up model (LASO) in terms of mIoU, with improvements of $8.02\%$ and $7.19\%$ on the full-view and partial-view tasks, respectively. Compared to OpenAD, which predicts regions based on a fixed set of affordance labels, our method utilizes long-context understanding and reasoning for segmentation; it surpasses OpenAD in mIoU by $16.9\%$ (full-view) and $15.96\%$ (partial-view) across 18 affordance types. Additionally, for metrics over all instances, we surpass the state-of-the-art model (LASO) by $23.38\%$ (full-view) and $24.93\%$ (partial-view) in $\mathrm{mAP}_{50}$ . The comparison results on closed-set detection can be found in Appendix Sect. A.2.

# 4.2.2 OUT-OF-DISTRIBUTION RESULTS

Testing on out-of-distribution (OOD) datasets is essential to assess the generalization capability of the model. We therefore constructed a new test set of approximately 559 entries from the AffordPose dataset (Jian et al., 2023) by filtering out affordance-object combinations that already exist in our IRAS dataset. Compared to existing datasets, this new set includes different types of affordances as well as unique affordance-object pairs, such as (twist, faucet), (lever, faucet), and (press, dispenser). As shown in Table 3, our approach achieves the best zero-shot performance on this OOD data.

# 4.3 ABLATION STUDY

Effects of Different Components. To investigate the effectiveness of each component in 3D-ADLLM, we conduct experiments with different variants of 3D-ADLLM. In particular, we compare two implementations: (1) w/o PC removes the pre-trained weights of $f_{\mathrm{PB}}$ and $f_{\mathrm{AFD}}$ , directly training our 3D-ADLLM; (2) w/o UL removes the sample-unbalanced factor.

Table 2: Zero-shot open-vocabulary detection results over all instances.

<table><tr><td></td><td>Method</td><td>mIoU1</td><td>mAcc1</td><td>mPrec1</td><td>mRec1</td><td>mAP50</td></tr><tr><td rowspan="4">Full-view</td><td>OpenAD-PointNet++</td><td>3.46</td><td>74.59</td><td>11.84</td><td>5.84</td><td>0.02</td></tr><tr><td>OpenAD-DGCNN</td><td>3.79</td><td>74.42</td><td>11.13</td><td>6.67</td><td>0.04</td></tr><tr><td>LASO</td><td>20.47</td><td>71.47</td><td>37.95</td><td>34.93</td><td>2.42</td></tr><tr><td>3D-ADLLM (ours)</td><td>30.28</td><td>70.66</td><td>40.89</td><td>55.93</td><td>27.80</td></tr><tr><td rowspan="4">Partial-view</td><td>OpenAD-PointNet++</td><td>2.17</td><td>71.97</td><td>5.64</td><td>3.74</td><td>0.02</td></tr><tr><td>OpenAD-DGCNN</td><td>2.08</td><td>72.00</td><td>6.65</td><td>3.40</td><td>0.02</td></tr><tr><td>LASO</td><td>11.46</td><td>72.14</td><td>32.70</td><td>16.49</td><td>0.70</td></tr><tr><td>3D-ADLLM (ours)</td><td>28.72</td><td>68.28</td><td>41.71</td><td>47.73</td><td>25.63</td></tr></table>

Table 3: Zero-shot open-vocabulary detection results on AffordPose data over all instances.

<table><tr><td>Method</td><td>mIoU1</td><td>mAcc1</td><td>mPrec1</td><td>mRec1</td><td>mAP50</td></tr><tr><td>OpenAD-PointNet++</td><td>7.61</td><td>65.13</td><td>22.47</td><td>13.01</td><td>0.37</td></tr><tr><td>OpenAD-DGCNN</td><td>8.02</td><td>66.76</td><td>15.83</td><td>13.52</td><td>0.39</td></tr><tr><td>LASO</td><td>34.49</td><td>77.12</td><td>56.04</td><td>37.88</td><td>8.40</td></tr><tr><td>3D-ADLLM (ours)</td><td>36.33</td><td>74.79</td><td>55.46</td><td>46.80</td><td>36.33</td></tr></table>

As shown in Table 6, the performance of 3D-ADLLM drops significantly without either of these components. Notably, the most substantial degradation, about $6\%$ in mIoU, occurs when the PC module is removed. UL is also critical for our framework: once it is removed, there is a noticeable reduction in the model's performance.

Effects of Different Backbones. As shown in Table 1, we experimented with different LLM backbones to evaluate the effectiveness of our framework. Specifically, we chose Phi-3.5-mini-instruct (Abdin et al., 2024) and Qwen2-1.5B (Yang et al., 2024) as the LLM backbone.

In terms of experimental results, Phi outperforms Qwen in the full-view setting. However, in the partial-view setting, the performance of Phi shows no significant difference compared to Qwen. Based on these findings, 3D-ADLLM adopts Phi as the default LLM backbone. In addition to testing different LLM backbones, we also explored different point encoders. Table 4 summarizes the performance of ULIP2 (Xue et al., 2024) and Uni3D (Zhou et al., 2023) as point encoders, with ULIP2 obtaining slightly better mean accuracy.

Table 4: The effects of different point encoders $f_{\mathrm{pe}}$ in 3D-ADLLM (full-view).

<table><tr><td>fpe</td><td>mIoUc</td><td>Accc</td><td>mAcc</td></tr><tr><td>ULIP2</td><td>30.43</td><td>29.36</td><td>47.78</td></tr><tr><td>Uni3D</td><td>30.26</td><td>26.21</td><td>48.16</td></tr></table>

Effects of Different Learning Objectives. We define the Affordance Region Ratio (Arr) as $p_{\mathrm{aff}} / p_{\mathrm{cloud}}$ , representing the proportion of affordance regions relative to the whole point cloud. In the IRAS task, the average Arr is approximately $18\%$ . However, for specific categories like "pull" and "listen," it is around $5\%$ , while for "wear" it reaches about $40\%$ . Variations in Arr across different predictions lead to class imbalance. Dice loss, a segmentation loss function, measures the similarity between predictions and ground truth. Unlike binary cross-entropy (BCE) loss, which focuses on point-level differences, Dice loss emphasizes global region similarity, making it more effective for handling data imbalance. As shown in Table 5, the model using Dice loss achieves superior mIoU in both the seen and unseen settings. Table 5 also shows that while the exclusive use of Dice loss yields a marginal improvement on unseen data, it does not perform as well on seen data compared to the combined use of Dice loss and BCE loss.

Table 5: Comparison results for different loss settings (full-view).

<table><tr><td></td><td>Openset-mIoUc</td><td>Closeset-mIoUc</td></tr><tr><td>DICE & BCE</td><td>30.43</td><td>42.35</td></tr><tr><td>DICE</td><td>31.00</td><td>38.65</td></tr><tr><td>BCE</td><td>15.99</td><td>31.14</td></tr></table>

Table 6: Results of 3D-ADLLM variants with different components removed (full-view).

<table><tr><td>Model</td><td>mIoUc</td><td>Accc</td><td>mAcc</td></tr><tr><td>3D-ADLLM</td><td>30.43</td><td>29.36</td><td>47.78</td></tr><tr><td>3D-ADLLM w/o PC</td><td>24.82</td><td>20.54</td><td>36.73</td></tr><tr><td>3D-ADLLM w/o UL</td><td>25.35</td><td>26.84</td><td>40.69</td></tr></table>

# 4.4 QUALITATIVE RESULTS

As shown in Fig. 4, our model accurately comprehends object affordance given complex reasoning instructions. It is noteworthy that even when dealing with small affordance components, such as the switch of a faucet, our model still performs well. Moreover, our 3D-ADLLM surpasses other models by employing a multi-stage training strategy that facilitates knowledge transfer and the extraction of world knowledge from LLMs. For example, when identifying areas of a chair that can take a seat (Fig. 4 (e)) or areas that can wrap around a cup (Fig. 4 (g)), our model significantly outperforms other models.

Figure 4: Visualization results of our 3D-ADLLM compared with other methods.

# 5 CONCLUSION

In this work, we reformulate the traditional affordance detection paradigm into the Instruction Reasoning Affordance Segmentation (IRAS) task, enabling open-world affordance detection. We then propose a multi-stage learning strategy with a newly defined Referring Object Part Segmentation (ROPS) task to transfer general segmentation knowledge to affordance detection. Finally, we propose 3D-AffordanceLLM (3D-ADLLM), a framework designed for query-reasoning affordance segmentation in 3D open scenarios that, to our knowledge, is the first to inject an LLM into 3D affordance perception. Experimental results demonstrate the effectiveness of 3D-ADLLM; we hope our work can shed new light on the direction of open-world affordance detection in the future.

# 6 ACKNOWLEDGEMENT

We would like to thank the reviewers for their constructive comments. This study is supported by the Shenzhen College Stability Support Plan (Grant No. GXWD20220817144428005), the National Natural Science Foundation of China (Grant No. 62406092), the National Natural Science Foundation of China (Grant No. U24B20175), the Shenzhen Science and Technology Program (Grant No. KJZD20240903100017022), and the Research on Efficient Exploration and Self-Evolution of APP Agents & Embodied Intelligent Cerebellum Control Model and Collaborative Feedback Training Project (Grant No. TC20240403047).

# REFERENCES
|
| 236 |
+
|
| 237 |
+
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.
|
| 238 |
+
Joya Chen, Difei Gao, Kevin Qinghong Lin, and Mike Zheng Shou. Affordance grounding from demonstration video to target image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6799-6808, 2023.
|
| 239 |
+
Sijin Chen, Xin Chen, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, and Tao Chen. L13da: Visual interactive instruction tuning for omni-3d understanding reasoning and planning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26428-26438, 2024.
|
| 240 |
+
Ali Cheraghian, Shafin Rahman, and Lars Petersson. Zero-shot learning of 3d point cloud objects. In 2019 16th International Conference on Machine Vision Applications (MVA), pp. 1-6. IEEE, 2019.
|
| 241 |
+
Ali Cheraghian, Shafin Rahman, Dylan Campbell, and Lars Petersson. Transductive zero-shot learning for 3d point cloud classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 923-933, 2020.
|
| 242 |
+
Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli Vanderbilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13142-13153, 2023.
|
| 243 |
+
Shengheng Deng, Xun Xu, Chaozheng Wu, Ke Chen, and Kui Jia. 3d affordancenet: A benchmark for visual object affordance understanding. In proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1778-1787, 2021.
|
| 244 |
+
Thanh-Toan Do, Anh Nguyen, and Ian Reid. Affordancenet: An end-to-end deep learning approach for object affordance detection. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 5882-5889. IEEE, 2018.
|
| 245 |
+
Yiran Geng, Boshi An, Haoran Geng, Yuanpei Chen, Yaodong Yang, and Hao Dong. Rlafford: End-to-end affordance learning for robotic manipulation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 5880-5886. IEEE, 2023.
|
| 246 |
+
James Jerome Gibson. The Senses Considered as Perceptual Systems. Houghton Mifflin, Boston, USA, 1966.
|
| 247 |
+
Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. Advances in Neural Information Processing Systems, 36:20482-20494, 2023a.
|
| 248 |
+
Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. Advances in Neural Information Processing Systems, 36:20482-20494, 2023b.
|
| 249 |
+
|
| 250 |
+
Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, and Dacheng Tao. Affordance transfer learning for human-object interaction detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 495-504, 2021.
|
| 251 |
+
Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang Zhao, Tao Jin, and Zhou Zhao. Chat-3d v2: Bridging 3d scene and large language models with object identifiers. arXiv preprint arXiv:2312.08168, 2023.
|
| 252 |
+
Juntao Jian, Xiuping Liu, Manyi Li, Ruizhen Hu, and Jian Liu. Affordpose: A large-scale dataset of hand-object interactions with affordance-driven hand pose. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14713-14724, 2023.
|
| 253 |
+
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
|
| 254 |
+
Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. Lisa: Reasoning segmentation via large language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9579-9589, 2024.
|
| 255 |
+
Gen Li, Varun Jampani, Deqing Sun, and Laura Sevilla-Lara. Locate: Localize and transfer object parts for weakly supervised affordance grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10922-10931, 2023.
|
| 256 |
+
Yicong Li, Na Zhao, Junbin Xiao, Chun Feng, Xiang Wang, and Tat-seng Chua. Laso: Language-guided affordance segmentation on 3d object. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14251-14260, 2024.
|
| 257 |
+
Weiping Liu, Jia Sun, Wanyi Li, Ting Hu, and Peng Wang. Deep learning on point clouds and its application: A survey. Sensors, 19(19):4188, 2019.
|
| 258 |
+
Liangsheng Lu, Wei Zhai, Hongchen Luo, Yu Kang, and Yang Cao. Phrase-based affordance detection via cyclic bilateral interaction. IEEE Transactions on Artificial Intelligence, 4(5):1186-1198, 2022.
|
| 259 |
+
Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Learning affordance grounding from exocentric images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2252-2261, 2022.
|
| 260 |
+
Eloise Matheson, Riccardo Minto, Emanuele GG Zampieri, Maurizio Faccio, and Giulio Rosati. Human-robot collaboration in manufacturing applications: A review. Robotics, 8(4):100, 2019.
|
| 261 |
+
Jinpeng Mi, Hongzhuo Liang, Nikolaos Katsakis, Song Tang, Qingdu Li, Changshui Zhang, and Jianwei Zhang. Intention-related natural language grounding via object affordance detection and intention semantic extraction. Frontiers in Neurorobotics, 14:26, 2020.
|
| 262 |
+
Björn Michele, Alexandre Boulch, Gilles Puy, Maxime Bucher, and Renaud Marlet. Generative zero-shot learning for semantic segmentation of 3d point clouds. In 2021 International Conference on 3D Vision (3DV), pp. 992-1002. IEEE, 2021.
|
| 263 |
+
Huaqing Min, Ronghua Luo, Jinhui Zhu, Sheng Bi, et al. Affordance research in developmental robotics: A survey. IEEE Transactions on Cognitive and Developmental Systems, 8(4):237-255, 2016.
|
| 264 |
+
Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, and Hao Su. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
|
| 265 |
+
Kaichun Mo, Yuzhe Qin, Fanbo Xiang, Hao Su, and Leonidas Guibas. O2o-afford: Annotation-free large-scale object-object affordance learning. In Conference on robot learning, pp. 1666-1677. PMLR, 2022.
|
| 266 |
+
|
| 267 |
+
Bogdan Moldovan, Plinio Moreno, Martijn Van Otterlo, José Santos-Victor, and Luc De Raedt. Learning relational affordance models for robots in multi-object manipulation tasks. In 2012 IEEE International Conference on Robotics and Automation, pp. 4373-4378. IEEE, 2012.
|
| 268 |
+
Tushar Nagarajan, Christoph Feichtenhofer, and Kristen Grauman. Grounded human-object interaction hotspots from video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8688-8697, 2019.
|
| 269 |
+
Anh Nguyen, Dimitrios Kanoulas, Darwin G Caldwell, and Nikos G Tsagarakis. Detecting object affordances with convolutional neural networks. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2765-2770. IEEE, 2016.
|
| 270 |
+
Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, and Anh Nguyen. Open-vocabulary affordance detection in 3d point clouds. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5692-5698. IEEE, 2023.
|
| 271 |
+
Abel Pacheco-Ortega and Walterio Mayol-Cuervas. One-shot learning for human affordance detection. In European Conference on Computer Vision, pp. 758-766. Springer, 2022.
|
| 272 |
+
Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017.
|
| 273 |
+
Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, Li Yi, and Kaisheng Ma. Shapellm: Universal 3d object understanding for embodied interaction. arXiv preprint arXiv:2402.17766, 2024.
|
| 274 |
+
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.
|
| 275 |
+
Debaditya Roy and Basura Fernando. Action anticipation using pairwise human-object interactions and transformers. IEEE Transactions on Image Processing, 30:8116-8129, 2021.
|
| 276 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 30, 2017.
|
| 277 |
+
Kashi Venkatesh Vishwanath, Diwaker Gupta, Amin Vahdat, and Ken Yocum. Modelnet: Towards a datacenter emulation environment. In 2009 IEEE Ninth International Conference on Peer-to-Peer Computing, pp. 81-82. IEEE, 2009.
|
| 278 |
+
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (tog), 38(5): 1-12, 2019.
|
| 279 |
+
Zehan Wang, Haifeng Huang, Yang Zhao, Ziang Zhang, and Zhou Zhao. Chat-3d: Data-efficiently tuning large language model for universal dialogue of 3d scenes. arXiv preprint arXiv:2308.08769, 2023.
|
| 280 |
+
Le Xue, Mingfei Gao, Chen Xing, Roberto Martin-Martin, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, and Silvio Savarese. Ulip: Learning a unified representation of language, images, and point clouds for 3d understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1179-1189, 2023.
|
| 281 |
+
Le Xue, Ning Yu, Shu Zhang, Artemis Panagopoulou, Junnan Li, Roberto Martin-Martin, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, et al. Ulip-2: Towards scalable multimodal pre-training for 3d understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 27091-27101, 2024.
|
| 282 |
+
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024.
|
| 283 |
+
|
| 284 |
+
Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, and Zheng-Jun Zha. Grounding 3d object affordance from 2d interactions in images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10905-10915, 2023.
|
| 285 |
+
Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19313-19322, 2022.
|
| 286 |
+
Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16259-16268, 2021.
|
| 287 |
+
Junsheng Zhou, Jinsheng Wang, Baorui Ma, Yu-Shen Liu, Tiejun Huang, and Xinlong Wang. Uni3d: Exploring unified 3d representation at scale. arXiv preprint arXiv:2310.06773, 2023.
|
| 288 |
+
|
| 289 |
+
# A APPENDIX
|
| 290 |
+
|
| 291 |
+
# A.1 BASELINE MODELS DETAILS
|
| 292 |
+
|
| 293 |
+
We compare our method with the following recent methods for zero-shot learning in 3D point clouds: ZSLPC (Cheraghian et al., 2019), TZSLPC (Cheraghian et al., 2020), and 3DGenZ (Michele et al., 2021). For these baselines, we replace their original text encoders with CLIP and retain the same settings as in OpenAD (Nguyen et al., 2023). Furthermore, we incorporate two affordance detection works, IAGNet (Yang et al., 2023) and LASO (Li et al., 2024), to provide a more comprehensive comparison. IAGNet (Yang et al., 2023) is an affordance detection method that takes paired image-point cloud data as input; to tailor it to our setting, we replace its original image backbone with a language model while keeping the rest of its architecture unchanged. ShapeLLM-7B (Qi et al., 2024) is a large-scale point cloud model that accepts point cloud and natural language inputs and possesses grounding capabilities. Consequently, we leverage its grounding abilities to perform zero-shot detection and compute masks for comparison.
|
| 294 |
+
|
| 295 |
+
# A.2 COMPARISON RESULTS ON CLOSE SET
|
| 296 |
+
|
| 297 |
+
In this work, we primarily focus on enhancing affordance detection capabilities in open-world scenes. However, our model still performs well on closed-set affordance detection tasks. As shown in Table 7 and Table 8, our 3D-ADLLM achieves the best performance on nearly all metrics in both the overall-classes and overall-instances settings.
|
| 298 |
+
|
| 299 |
+
Table 7: Main results of 3D-ADLLM compared with other methods on closed-set detection over all classes.
|
| 300 |
+
|
| 301 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">Full-view</td><td colspan="3">Partial-view</td></tr><tr><td>mIoUc</td><td>Accc</td><td>mAccc</td><td>mIoUc</td><td>Accc</td><td>mAccc</td></tr><tr><td>Point Transformer (Zhao et al., 2021)</td><td>41.26</td><td>-</td><td>67.03</td><td>40.51</td><td>-</td><td>65.34</td></tr><tr><td>PointNet++ (Qi et al., 2017)</td><td>41.26</td><td>-</td><td>68.14</td><td>41.10</td><td>-</td><td>66.74</td></tr><tr><td>DGCNN (Wang et al., 2019)</td><td>42.09</td><td>-</td><td>61.47</td><td>41.93</td><td>-</td><td>63.12</td></tr><tr><td>OpenAD-PointNet++ (Nguyen et al., 2023)</td><td>40.17</td><td>38.61</td><td>66.83</td><td>40.44</td><td>38.92</td><td>65.84</td></tr><tr><td>OpenAD-DGCNN (Nguyen et al., 2023)</td><td>41.17</td><td>35.71</td><td>59.17</td><td>39.87</td><td>35.15</td><td>59.27</td></tr><tr><td>IAGNet (Yang et al., 2023)</td><td>40.04</td><td>35.12</td><td>53.05</td><td>41.24</td><td>34.68</td><td>52.58</td></tr><tr><td>LASO (Li et al., 2024)</td><td>41.31</td><td>35.02</td><td>53.96</td><td>40.11</td><td>35.21</td><td>52.68</td></tr><tr><td>3D-ADLLM</td><td>42.85</td><td>41.84</td><td>66.35</td><td>41.92</td><td>43.40</td><td>61.93</td></tr></table>
|
| 302 |
+
|
| 303 |
+
Table 8: The performance of closed-set affordance detection over all instances.
|
| 304 |
+
|
| 305 |
+
<table><tr><td></td><td>Method</td><td>mIoU<sub>I</sub></td><td>mAcc<sub>I</sub></td><td>mPrec<sub>I</sub></td><td>mRec<sub>I</sub></td><td>mAP<sub>50</sub></td></tr><tr><td rowspan="4">Full-view</td><td>OpenAD-PointNet++</td><td>28.34</td><td>64.11</td><td>33.91</td><td>61.45</td><td>5.12</td></tr><tr><td>OpenAD-DGCNN</td><td>26.98</td><td>65.94</td><td>34.38</td><td>54.76</td><td>4.77</td></tr><tr><td>LASO</td><td>44.43</td><td>83.80</td><td>62.73</td><td>60.25</td><td>21.13</td></tr><tr><td>3D-ADLLM (ours)</td><td>46.29</td><td>81.24</td><td>57.90</td><td>64.27</td><td>46.38</td></tr><tr><td rowspan="4">Partial-view</td><td>OpenAD-PointNet++</td><td>29.50</td><td>63.26</td><td>35.21</td><td>61.34</td><td>6.77</td></tr><tr><td>OpenAD-DGCNN</td><td>17.07</td><td>67.11</td><td>27.96</td><td>30.15</td><td>1.87</td></tr><tr><td>LASO</td><td>43.35</td><td>82.31</td><td>60.27</td><td>59.57</td><td>20.85</td></tr><tr><td>3D-ADLLM (ours)</td><td>44.06</td><td>79.64</td><td>56.23</td><td>64.21</td><td>46.60</td></tr></table>
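For reference, below is a minimal sketch of how the instance-level IoU and its mean in Table 8 could be computed, assuming point-wise affordance scores and binary ground-truth masks per (object, affordance) instance; the 0.5 threshold and the averaging layout are assumptions, and the exact benchmark protocol follows OpenAD (Nguyen et al., 2023) and LASO (Li et al., 2024):

```python
import numpy as np

def binary_iou(pred_score: np.ndarray, gt_mask: np.ndarray, thr: float = 0.5) -> float:
    """IoU between a thresholded point-wise affordance score and a binary label."""
    pred = pred_score >= thr
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union > 0 else 1.0

def mean_iou_over_instances(pred_scores, gt_masks) -> float:
    """Average IoU across (object, affordance) instances, i.e. the mIoU_I column."""
    return float(np.mean([binary_iou(p, g) for p, g in zip(pred_scores, gt_masks)]))
```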
|
| 306 |
+
|
| 307 |
+
# A.3 DATA ANALYSIS
|
| 308 |
+
|
| 309 |
+
# A.3.1 THE DETAILED SETTINGS OF FULL VIEW AND PARTIAL VIEW.
|
| 310 |
+
|
| 311 |
+
We build the IRAS dataset on top of the 3D AffordanceNet (OpenAD) dataset (Nguyen et al., 2023) and retain its settings for both full-view and partial-view point clouds.
|
| 312 |
+
|
| 313 |
+
Full-view: Given an object as a 3D point cloud, without prior knowledge of which affordances the object supports, the full-shape affordance estimation task aims to estimate the supported affordance types and predict a point-wise probabilistic affordance score.
|
| 314 |
+
|
| 315 |
+
Partial-view: In real-world application scenarios, we often observe only a partial view of a 3D shape, represented as a partial point cloud. Therefore, another important task is to estimate affordances from partial point clouds.
|
| 316 |
+
|
| 317 |
+
# A.3.2 THE VISUALIZATION OF DATA ANALYSIS.
|
| 318 |
+
|
| 319 |
+
# A.4 TRAINING DETAILS
|
| 320 |
+
|
| 321 |
+
We construct our IROS dataset based on the PartNet dataset and aim to acquire general segmentation knowledge during the pre-training phase. To balance training cost against model performance, we selectively sample a category-based subset of the data.
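As an illustration, a minimal sketch of such category-based subsampling follows; the per-category cap and the record layout are hypothetical, since the sampling ratio is not stated:

```python
import random
from collections import defaultdict

def sample_by_category(shapes, per_category=200, seed=0):
    """Pick at most `per_category` PartNet shapes from each object category."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for shape in shapes:                  # each shape record is assumed to carry a "category" field
        buckets[shape["category"]].append(shape)
    subset = []
    for items in buckets.values():
        subset.extend(rng.sample(items, min(per_category, len(items))))
    return subset
```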
|
| 322 |
+
|
| 323 |
+

|
| 324 |
+
Figure 5: The analysis of IRAS task.
|
| 325 |
+
|
| 326 |
+

|
| 327 |
+
Figure 6: The analysis of extensive test dataset in sec. 4.2.2.
|
| 328 |
+
|
| 329 |
+

|
| 330 |
+
Figure 7: The analysis of ROPS task.
|
3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:366e53ec91f4d4b16d92161df5dfbc8e70d4ac457db9e7169c086dd045b1298f
|
| 3 |
+
size 744640
|
3daffordancellmharnessinglargelanguagemodelsforopenvocabularyaffordancedetectionin3dworlds/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:dca35ac1ae3b0f872f8e25124e461f3c1e54533302c8d40227fbf7460ab4880f
|
| 3 |
+
size 452527
|
3dgsdragdragginggaussiansforintuitivepointbased3dediting/3772ab48-3125-4695-a4cf-9b4e8a378d76_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1027996ecfa2daf36c4ad348b9392b4be8d06428faef2311930cb62f2cd6dced
|
| 3 |
+
size 105936
|
3dgsdragdragginggaussiansforintuitivepointbased3dediting/3772ab48-3125-4695-a4cf-9b4e8a378d76_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:62b1cf031bb2a338cffb3083edd9d20a06e283945236d138e3bb77ab45d09507
|
| 3 |
+
size 129479
|
3dgsdragdragginggaussiansforintuitivepointbased3dediting/3772ab48-3125-4695-a4cf-9b4e8a378d76_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:52d07174485c0e987c2a6a37f45636e4a79fb3a2295b371f07df449e141971ab
|
| 3 |
+
size 11428612
|
3dgsdragdragginggaussiansforintuitivepointbased3dediting/full.md
ADDED
|
@@ -0,0 +1,484 @@
| 1 |
+
# 3DGS-DRAG: DRAGGING GAUSSIANS FOR INTUITIVE POINT-BASED 3D EDITING
|
| 2 |
+
|
| 3 |
+
Jiahua Dong Yu-Xiong Wang
|
| 4 |
+
|
| 5 |
+
University of Illinois Urbana-Champaign
|
| 6 |
+
|
| 7 |
+
{jiahuad2, yxw}@illinois.edu
|
| 8 |
+
|
| 9 |
+

|
| 10 |
+
|
| 11 |
+

|
| 12 |
+
User Edit
|
| 13 |
+
|
| 14 |
+

|
| 15 |
+
|
| 16 |
+

|
| 17 |
+
Rendered Drag Result
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
User Edit
|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
Rendered Drag Result
|
| 28 |
+
Figure 1: Our proposed 3DGS-Drag framework enables high-quality 3D drag editing: Users only need to input 3D handle points (circle) and target points (triangle). Our method precisely moves the handle points to match the target points while preserving the overall content and details.
|
| 29 |
+
|
| 30 |
+

|
| 31 |
+
|
| 32 |
+

|
| 33 |
+
User Edit
|
| 34 |
+
|
| 35 |
+

|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
Rendered Drag Result
|
| 39 |
+
|
| 40 |
+
# ABSTRACT
|
| 41 |
+
|
| 42 |
+
The transformative potential of 3D content creation has been progressively unlocked through advancements in generative models. Recently, intuitive drag editing with geometric changes has attracted significant attention in 2D editing yet remains challenging for 3D scenes. In this paper, we introduce 3DGS-Drag - a point-based 3D editing framework that provides efficient, intuitive drag manipulation of real 3D scenes. Our approach bridges the gap between deformation-based and 2D-editing-based 3D editing methods, addressing their limitations to geometry-related content editing. We leverage two key innovations: deformation guidance utilizing 3D Gaussian Splatting for consistent geometric modifications and diffusion guidance for content correction and visual quality enhancement. A progressive editing strategy further supports aggressive 3D drag edits. Our method enables a wide range of edits, including motion change, shape adjustment, inpainting, and content extension. Experimental results demonstrate the effectiveness of 3DGS-Drag in various scenes, achieving state-of-the-art performance in geometry-related 3D content editing. Notably, the editing is efficient, taking 10 to 20 minutes on a single RTX 4090 GPU. Our code is available at https://github.com/Dongjiahua/3DGS-Drag.
|
| 43 |
+
|
| 44 |
+
# 1 INTRODUCTION
|
| 45 |
+
|
| 46 |
+
Recent years have witnessed remarkable advancements in 3D scene representation techniques, such as Neural Radiance Fields (NeRF) (Mildenhall et al., 2021) and 3D Gaussian Splatting (3DGS) (Kerbl et al., 2023). These methods have revolutionized the way we capture, represent, and synthesize 3D content, offering unprecedented levels of detail and realism. Inspired by their success
|
| 47 |
+
|
| 48 |
+
and the blooming development of 2D generative models (Rombach et al., 2022), recent works in 3D generation (Tang et al., 2024; Poole et al., 2023) can now generate 3D content with high quality and efficiency. However, precise and intuitive editing of 3D scenes remains a challenge, particularly in contrast to the sophisticated editing capabilities available for 2D images. While 2D editing methods like DragGAN (Pan et al., 2023) offer point-based manipulation, extending such functionalities to 3D scenes presents substantial technical hurdles.
|
| 49 |
+
|
| 50 |
+
Specifically, the underexplored capability is intuitive content editing with geometric change. Recent progress in 3D editing can be roughly grouped into two classes: deformation-based and 2D-editing-based. Deformation-based methods (Huang et al., 2024; Xie et al., 2024) primarily focus on motion editing, assuming a strong geometry prior (Xie et al., 2024) or relying on video to learn motion patterns (Huang et al., 2024). Besides requiring sufficient prior information, they inherently cannot edit unseen content intuitively. For 2D-editing-based methods, recent works (Haque et al., 2023; Dong & Wang, 2023; Chen et al., 2024a) have attempted to distill the editing ability of 2D diffusion models (Brooks et al., 2023) by editing the dataset of different-view images with the 2D diffusion model. These approaches remain limited to appearance modifications and minor geometric adjustments, since larger 2D geometric edits fail to converge in 3D. Their text guidance also sometimes causes incorrect edits, because the diffusion model fails to understand the text prompt. Bridging the geometric editing ability of deformation and the content editing ability of 2D-editing models has not yet been well studied.
|
| 51 |
+
|
| 52 |
+
Motivated by these observations, we introduce 3DGS-Drag - an intuitive 3D drag editing method for real scenes. Extending the flexible editing format of DragGAN (Pan et al., 2023), we take 3D handle points and target points as inputs, aiming for geometry-related 3D content editing. Our key insight is to fully leverage two sources of guidance for 3D content editing, which explicitly regularize the edits from different views to be consistent and optimized toward the target 3D points. The first guidance is deformation guidance. Benefiting from the explicit representation of 3D Gaussian Splatting (Kerbl et al., 2023), we propose a simple but effective deformation strategy without the requirement for prior information. With such a strategy, we directly deform the 3D Gaussians and leverage them as guidance for different views. Moreover, the deformation of the Gaussians facilitates optimization around the deformed space, simplifying the generation of detailed geometry. The second one is diffusion guidance. Notably, since there is no prior information in our setting, the deformed Gaussians always have incorrect content and artifacts. We use the diffusion model to correct the content and improve the visual quality. This guidance is grounded in our observation that a fine-tuned diffusion model serves as a view-consistent editor for a 3D scene. Consequently, it achieves better consistency given the previous deformation guidance.
|
| 53 |
+
|
| 54 |
+
To support more challenging 3D drag edits, we further propose a progressive editing strategy. Specifically, we divide the drag operation into several intervals and proceed with editing step by step. The continuity of editing is guaranteed by a 3D relocation strategy. Finally, our experimental results demonstrate the effectiveness of 3DGS-Drag across various scenes and edits. We address the challenges of 3D drag operations and demonstrate improved multi-view consistency compared to prior techniques.
|
| 55 |
+
|
| 56 |
+
Our major contributions can be summarized as follows: 1) We present a novel framework for editing 3D scenes, featuring a point-based drag editing approach. 2) We propose an effective method to bridge 3D deformation guidance and diffusion guidance for conducting geometry-related 3D content editing. 3) We further propose a progressive drag editing method to improve editing results. 4) Extensive evaluations show our method achieves state-of-the-art results in this setting, which implicitly includes motion change, shape adjustment, inpainting, and content extension.
|
| 57 |
+
|
| 58 |
+
# 2 RELATED WORK
|
| 59 |
+
|
| 60 |
+
# 2.1 2D IMAGE EDITING
|
| 61 |
+
|
| 62 |
+
Initially, image generation methods arose from generative adversarial networks (GAN) (Goodfellow et al., 2014; Karras et al., 2019). Based on their latent representation, early works tried to modify the latent to adjust certain attributes or contents of the image (Abdal et al., 2021; Endo, 2022; Harkonen et al., 2020; Leimkuhler & Drettakis, 2021). However, due to the limited capability of the GAN model and the implicit representation of the latent code, it is hard to achieve high-quality and detailed edits. Recently, diffusion models have shown great potential for text-to-image tasks (Rombach et al., 2022). Their feature map representation and the large-scale data empower many image editing methods (Kawar et al., 2023; Ramesh et al., 2022; Meng et al., 2022; Brooks et al., 2023). SDEdit (Meng et al., 2022) performs a noising and denoising procedure to keep the structural information while changing the details. Instruct-Pix2Pix (Brooks et al., 2023) builds an instruction editing dataset and trains the diffusion model to edit the image following the instruction. Compared with previous methods, Instruct-Pix2Pix shows better editing consistency.
|
| 65 |
+
|
| 66 |
+
Although text-based image editing can generate high-fidelity results, it cannot achieve fine-grained editing. DragGAN (Pan et al., 2023) proposed a point-based interactive editing method: the user inputs several handle points and target points, and the latent is then optimized to move the handle points to the targets. To improve generality, DragDiffusion (Shi et al., 2024) transfers this technique to diffusion models (Rombach et al., 2022). Later, SDE-Drag (Nie et al., 2024) and RegionDrag (Lu et al., 2024) further improve the performance. An inverse-forward process is necessary for these diffusion-based methods, making the operation time-consuming. In addition, 3D consistency is not guaranteed in such 2D models, so they cannot be directly applied to 3D.
|
| 67 |
+
|
| 68 |
+
In this paper, we adopt the 2D diffusion model to perform 3D-consistent view correction. Our editing not only generates intuitive new content but also removes potential 3D artifacts.
|
| 69 |
+
|
| 70 |
+
# 2.2 2D-EDITING-BASED 3D EDITING
|
| 71 |
+
|
| 72 |
+
Prior to 3DGS (Kerbl et al., 2023), Neural Radiance Fields (NeRF) (Mildenhall et al., 2021) were used as a common connector from 3D representation to 2D models. Early works on NeRF could only deal with color and shape adjustment (Chiang et al., 2022; Huang et al., 2021; 2022; Wu et al., 2023; Bao et al., 2023; Zhang et al., 2022; Jambon et al., 2023). SNeRF (Nguyen-Phuoc et al., 2022) proposes to use an image stylization model, achieving high-quality stylization results. Later, NeRF-Art (Wang et al., 2023) uses CLIP (Radford et al., 2021) to distill the knowledge to NeRF. However, since CLIP is not a generative model and is highly semantic-based, such an approach cannot get results with high fidelity. Instruct-NeRF2NeRF (Haque et al., 2023) proposes to use the Instruct-Pix2Pix model to iteratively edit the dataset. They can edit various scenes with a broad range of instructions. ViCA-NeRF (Dong & Wang, 2023) proposes to directly edit the dataset without fine-tuning NeRF; specifically, they make multi-view consistent edits by utilizing depth information. DreamEditor (Zhuang et al., 2023) proposes to use a fine-tuned Dreambooth (Ruiz et al., 2023) to help with editing. ConsistentDreamer (Chen et al., 2024a) further fine-tunes a ControlNet to give more detailed edits. However, all these methods are limited by the 3D consistency across different views, and thus can only make subtle geometric changes. PDS (Koo et al., 2024) proposes a new distillation loss to improve the result but suffers from degraded rendering quality and limited geometric editing ability.
|
| 73 |
+
|
| 74 |
+
Inspired by the efficiency of 3DGS, recent approaches (Fang et al., 2024; Chen et al., 2024b; Chen & Wang, 2024) propose to migrate the success of NeRF editing to 3D Gaussians. However, they mainly follow the idea of Instruct-NeRF2NeRF (Haque et al., 2023) by changing the 3D representation, and thus have similar limitations. Some approaches (Xie et al., 2023; Shen et al., 2024; Yoo et al., 2024; Dong et al., 2024) have attempted to extend the drag operation to 3D; however, they are limited to handling single objects. In contrast, our approach leverages the explicit representation of 3DGS and focuses on real scenes.
|
| 75 |
+
|
| 76 |
+
# 2.3 DEFORMATION-BASED 3D EDITING
|
| 77 |
+
|
| 78 |
+
3D deformation is a challenging task since the target is to generate unseen motions. Traditional methods (Sorkine-Hornung & Alexa, 2007; Sorkine, 2005) apply Laplacian coordinates for mesh deformation. Recently, people have focused on deformation in 3D representations like NeRF and 3DGS. Specifically, Xu & Harada (2022) propose to build 3D cages as the motion prior to guide deformation. Yuan et al. (2022) reconstruct the mesh from NeRF and deform the mesh instead. NeuralEditor (Chen et al., 2023) requires dense point cloud deformation as input and applies a point-like NeRF structure for deformation. All these methods need a strong geometry prior for editing, which is hard and inconvenient in practice. PhysGaussian (Xie et al., 2024) considers Gaussian ellipsoids as a continuum and integrates physics. SC-GS (Huang et al., 2024) samples control points as a structure-representing graph to guide motion. However, the physics simulation and continuum
|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
Figure 2: Overview of 3DGS-Drag : Given a trained 3D Gaussian splatting model and the dataset, we use the multi-step editing scheduler to calculate the intermediate handle points $p_h'(i)$ and target points $p_t'(i)$ for step $i$ . In each step, we first deform the 3D Gaussians using handle points and target points. Then, we render the image for each view and correct it with a diffusion model. The final corrected images will be used to train 3D Gaussians to improve quality. The diffusion model is fine-tuned with LoRA for more consistent edits.
|
| 82 |
+
|
| 83 |
+
assumption makes PhysGaussian less flexible and limited to continuous scenes. SC-GS's control points are an approximation of dense points, thus also relying on sufficient capture of the object's geometry. It also takes dynamic scenes as input to build prior motion knowledge.
|
| 84 |
+
|
| 85 |
+
The hard prior knowledge requirements or strict assumptions make these methods unsuitable for large real scenes, which often provide only partial-view information and have complex layouts. In addition, they do not have the ability to create new parts. Rather than building a better deformation method, we propose a simpler deformation strategy for 3DGS to give rough deformations. Since 2D generative models (Rombach et al., 2022) already have a sense of plausible motions and content, we borrow such knowledge to provide more flexible 3D edits.
|
| 86 |
+
|
| 87 |
+
# 3 METHOD
|
| 88 |
+
|
| 89 |
+
# 3.1 PRELIMINARY
|
| 90 |
+
|
| 91 |
+
3D Gaussian Splatting. 3D Gaussian splatting (Kerbl et al., 2023) uses a collection of 3D Gaussians to represent 3D information, demonstrating effectiveness in object and scene reconstruction tasks. Each Gaussian is characterized by a center $\mu \in \mathbb{R}^3$ , a scaling factor $s \in \mathbb{R}^3$ , and a rotation quaternion $q \in \mathbb{R}^4$ . The model also incorporates an opacity value $\alpha \in \mathbb{R}$ and a color feature $c \in \mathbb{R}^d$ for volumetric rendering, where $d$ indicates the degrees of freedom. The full set of parameters is denoted as $\Gamma$ , where $\Gamma^i = \{\mu^i, s^i, q^i, \alpha^i, c^i\}$ represents the parameters for the $i$ -th Gaussian.
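To make this parameterization concrete, here is a minimal sketch of the per-Gaussian record described above; the field names and NumPy layout are our own illustration rather than the original implementation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian:
    """One 3D Gaussian, i.e. Gamma^i = {mu, s, q, alpha, c}."""
    mu: np.ndarray     # center, shape (3,)
    s: np.ndarray      # per-axis scaling factor, shape (3,)
    q: np.ndarray      # rotation quaternion, shape (4,)
    alpha: float       # opacity used in volumetric rendering
    c: np.ndarray      # color feature, shape (d,) with d degrees of freedom
```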
|
| 92 |
+
|
| 93 |
+
# 3.2 FRAMEWORK OVERVIEW
|
| 94 |
+
|
| 95 |
+
Our framework is illustrated in Figure 2. It takes a pretrained 3D Gaussian splatting model and several handle points along with their corresponding target points as input. Specifically, the handle points are denoted as $p_h \in \mathbb{R}^{n \times 3}$ and the target points as $p_t \in \mathbb{R}^{n \times 3}$, where $n$ is the number of handle points. We aim to move the handle part to the target position while preserving similar content. Depending on the input points, this process may entail appearance and geometric changes, allowing more challenging edits with user-friendly inputs.
|
| 96 |
+
|
| 97 |
+
Different from the idea of 2D drag editing techniques (Pan et al., 2023; Shi et al., 2024; Lu et al., 2024), which either optimize or operate on the inverse feature of a 2D image, we use deformation-based geometric guidance and diffusion-based appearance guidance for 3D editing. For a single step of the drag operation, we first deform the 3D Gaussians with the provided handle and target points (Sec. 3.3). Such deformation is conducted in a copy-and-paste manner to allow more editing flexibility. Due to the sparsity and long-distance challenge of the drag operation, the rendering result from the deformed Gaussians has poor visual quality and incorrect content. Thus, we propose to use diffusion-guided image correction on the rendered images (Sec. 3.4), which efficiently corrects the content and removes artifacts. To resolve editing with more aggressive changes, we propose
|
| 98 |
+
|
| 99 |
+
a multi-step editing scheduler to progressively edit the scene (Sec. 3.5). As the whole process is divided into intervals, the user can stop at any intermediate step when achieving a satisfactory outcome.
|
| 100 |
+
|
| 101 |
+
# 3.3 DEFORMATION GUIDANCE FOR GEOMETRIC MODIFICATION
|
| 102 |
+
|
| 103 |
+
As we aim to deform the 3D scenes to provide geometry guidance, we leverage 3DGS to benefit from its explicit representation and efficiency. The deformation involves two challenges in our task: (1) how to approximately deform the 3D Gaussians given sparse handle points and a long-distance drag target, without structural modification to standard 3DGS; and (2) how to avoid degenerating to direct deformation, allowing more flexibility for edits like moving, extending, and others. Our solution is described below; as a result, we achieve reliable deformation of 3DGS given limited point information.
|
| 104 |
+
|
| 105 |
+
Drag Deformation. The explicit representation of 3DGS enables efficient 3D deformation and adjustment. However, the real deformation function cannot be precisely computed, given only handle points and target points. Thus, we approximate it to give a rough geometry guidance. For the $i$ th handle point $p_h^i$ , we assign the Gaussians $P_h^i$ within a certain distance $\tau$ in 3D to this point. These Gaussians are considered to be deformed and guided by this handle point. The union of $\{P_h^i | i = 1,2,\dots,n\}$ is denoted as $P_{h} = \bigcup_{i = 1}^{n}P_{h}^{i}$ .
|
| 106 |
+
|
| 107 |
+
Firstly, we calculate the translation and rotation for each handle point. The translation is simply computed as $\Delta p_h^i = p_t^i - p_h^i$. The rotation does not further change the position of the handle points; instead, it represents the potential orientation change. Since the 3D Gaussians are also parameterized by a rotation $q$, such a parameter is crucial to guide the Gaussian deformation. However, our handle points are just coordinates without orientation information. To approximate the rotation, we calculate the relative rotation with the top-$K$ ($K = 2$) nearest handle points $\{p_h^k | k \in N_h^i\}$, where $N_{h}^{i}$ are the indices of the top-$K$ nearest handle points. A linear weight is used due to the sparsity of the points. Specifically, the weight is calculated as:
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
w _ {h} ^ {i k} = 1 - \frac {\left\| p _ {h} ^ {i} - p _ {h} ^ {k} \right\| _ {2} ^ {2}}{\sum_ {j \in N _ {h} ^ {i}} \left\| p _ {h} ^ {i} - p _ {h} ^ {j} \right\| _ {2} ^ {2}}. \tag {1}
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
Then, we calculate the relative rotation quaternion $\Delta q_h^{ik}$ between $p_h^i$ and $p_h^k$ (Details in Sec. D), and the quaternion $\Delta q_h^i$ of pair $(p_h^i,p_t^i)$ is calculated as $\Delta q_h^i = \sum_{k\in N_i}w_h^{ik}\Delta q_h^{ik}$ .
|
| 114 |
+
|
| 115 |
+
After calculating each handle point's translation and rotation quaternion, we can interpolate the entire 3D Gaussians' deformation. Specifically for each Gaussian $\Gamma^i\in P_h$ , the deformed Gaussian is interpolated from the transformation of top- $K$ ( $K = 2$ ) nearby handle points $\{p_h^j |j\in N_i\}$ where $N_{i}$ are the indices of top- $K$ nearest handle points. The deformed center $\mu_d^i$ and rotation quaternion $q_{d}^{i}$ are:
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
w ^ {i k} = 1 - \frac {\left\| \mu^ {i} - p _ {h} ^ {k} \right\| _ {2} ^ {2}}{\sum_ {j \in N _ {i}} \left\| \mu^ {i} - p _ {h} ^ {j} \right\| _ {2} ^ {2}}, \tag {2}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
$$
|
| 122 |
+
\mu_ {d} ^ {i} = \mu^ {i} + \sum_ {k \in N _ {i}} w ^ {i k} \Delta p _ {h} ^ {k}, \tag {3}
|
| 123 |
+
$$
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
q _ {d} ^ {i} = \sum_ {k \in N _ {i}} \left(w ^ {i k} \Delta q _ {h} ^ {k}\right) \otimes q ^ {i}, \tag {4}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where $\mu^i$ and $q^i$ are the original center and rotation quaternion, and $\otimes$ is the quaternion product. When there is only one handle point, no quaternion change is applied. We do not directly update the old Gaussians to the deformed Gaussians since this limits deformation and is not suitable for tasks like "make his sleeves longer." Inspired by SDE-Drag (Nie et al., 2024), we use a copy-and-paste manner to place the deformed Gaussians and keep the old ones. To offer more flexibility for optimization, we adjust the opacity of the original Gaussians $P_h$ to a smaller value, allowing the 2D updates to determine whether to keep or remove the Gaussians.
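The deformation step can be summarized in a short sketch. This is a simplified illustration of Eqs. (2)-(3) and the copy-and-paste strategy, not the authors' code: per-handle rotations are omitted (the single-handle case), and the radius `tau` and reduced opacity value are placeholders:

```python
import numpy as np

def drag_deform(mu, q, alpha, p_h, p_t, tau=0.05, K=2, low_opacity=0.05):
    """Copy-and-paste drag deformation of 3D Gaussians.

    mu: (M,3) centers, q: (M,4) quaternions, alpha: (M,) opacities.
    p_h, p_t: (n,3) handle and target points.
    """
    delta = p_t - p_h                                                  # per-handle translation
    dist = np.linalg.norm(mu[:, None, :] - p_h[None, :, :], axis=-1)   # (M, n)
    idx = np.where((dist < tau).any(axis=1))[0]                        # Gaussians assigned to P_h

    K_eff = min(K, p_h.shape[0])
    nearest = np.argsort(dist[idx], axis=1)[:, :K_eff]                 # top-K handles per Gaussian
    d2 = np.take_along_axis(dist[idx], nearest, axis=1) ** 2
    w = 1.0 - d2 / d2.sum(axis=1, keepdims=True)                       # Eq. (2)
    if K_eff == 1:
        w = np.ones_like(w)                                            # single handle: full weight

    mu_new = mu[idx] + (w[..., None] * delta[nearest]).sum(axis=1)     # Eq. (3)

    # Copy-and-paste: append the deformed Gaussians, fade the originals.
    mu_out = np.concatenate([mu, mu_new])
    q_out = np.concatenate([q, q[idx]])                                # rotation change omitted here
    alpha_out = np.concatenate([alpha, alpha[idx]])
    alpha_out[idx] = low_opacity
    return mu_out, q_out, alpha_out, idx
```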
|
| 130 |
+
|
| 131 |
+

|
| 132 |
+
Figure 3: Multi-view consistent 2D edits: With the deformed rendering as input, the fine-tuned diffusion model can perform multi-view consistent edits, and the artifacts and incorrect parts (shoes) are fixed.
|
| 133 |
+
|
| 134 |
+
Local Editing Mask. Since drag operations mainly focus on a part of the entire scene, local editing is necessary to maintain the background information. Following Gaussian Editor (Chen et al., 2024b), we assign a mask $M$ to the Gaussians of $P_{h}$, which are considered changeable. Different from Gaussian Editor, our work builds both 3D and 2D local editing masks to work with more complex scenes and geometry edits. For the 3D mask, we inherit the mask from the original Gaussians when deforming new Gaussians or during the densification procedure. The Gaussians outside of the mask are not changed in the optimization. For the 2D mask, we render the mask for each view and binarize it to $\{0, 1\}$ with a threshold, resulting in masks $\{m^v\}$, where $v$ denotes the $v$-th view. Note that the mask rendering is performed after the deformation, so both the original region and the target region are covered. The mask is further dilated so that the nearby context can also be adapted.
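A minimal sketch of this per-view 2D mask post-processing follows; the threshold and dilation size are illustrative values, not reported in the paper:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def view_editing_mask(rendered_mask: np.ndarray, thr: float = 0.5, dilate_iters: int = 15) -> np.ndarray:
    """Binarize a soft mask rendered from the editable Gaussians and dilate it
    so that the edit can also adapt the nearby context."""
    m = rendered_mask > thr
    return binary_dilation(m, iterations=dilate_iters)
```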
|
| 135 |
+
|
| 136 |
+
# 3.4 DIFFUSION GUIDANCE FOR APPEARANCE CORRECTION
|
| 137 |
+
|
| 138 |
+
The direct deformation of Gaussians often creates notable artifacts and cannot generate correct semantic content. Inspired by recent successes in 3D editing (Haque et al., 2023), we update the dataset to edit 3D scenes. However, integrating the concept of 2D dragging into a 3D context is non-trivial. Previous 2D drag methods often necessitate a time-consuming forward and backward process (Shi et al., 2024; Nie et al., 2024). Moreover, during the training process, inconsistent 2D edits from different views make the final result deviate from expectations and leave it full of artifacts. To address these issues, we propose to use inverse-free 2D image editing that achieves stronger 3D consistency, efficiency, and quality, relying on consistent renderings from the deformed 3D content. As shown in Figure 3, our method generates multi-view consistent 2D edits. In detail, given the rendered image from the deformed 3D Gaussians, we introduce an Image2Image view correction to obtain corrected 2D edits. To overcome the challenge of dataset editing with geometry change, we update the dataset in an annealed dataset editing manner.
|
| 139 |
+
|
| 140 |
+
Image2Image View Correction. Although the deformed Gaussian gives better 3D consistency, it cannot benefit from latent-based drag methods (Pan et al., 2023). This is because the 3D consistency is ensured with newly rendered images. In contrast, latent-based methods heavily rely on operating the feature map of the same image. Inspired by the common approach for image editing (Meng et al., 2022), we add noise and then denoise it through the Dreambooth (Ruiz et al., 2023) model. By changing the image to a sketch level and denoising it, the diffusion model can partially understand and complete the deformed part.
|
| 141 |
+
|
| 142 |
+
To mitigate the influence of randomness from the diffusion model, the Dreambooth model is fine-tuned on each scene with LoRA (Hu et al., 2022). We find that after fine-tuning, the diffusion model becomes a multi-view consistent editor. The experiment results in Figure 3 show that the diffusion model can successfully understand the deformed image and generate an image with the correct content even without the inverse process. However, such corrections still cannot fully converge in one update, requiring a better dataset editing strategy as follows.
|
| 143 |
+
|
| 144 |
+
Annealed Dataset Editing. Iterative dataset editing has been a common approach for 3D appearance editing (Haque et al., 2023). The idea is to progressively change the appearance of 3D and use the rendering to guide consistent 2D editing further. However, such a strategy does not work well with geometry-related edits because it is harder to converge given inconsistent geometry. In addition,
|
| 145 |
+
|
| 146 |
+

|
| 147 |
+
Figure 4: Intermediate dragging steps and tracked mask: Our method conducts progressive editing toward the target point. The dragged Gaussians are tracked to achieve aggressive edits.
|
| 148 |
+
|
| 149 |
+
long-term iterative updates also accumulate serious blurriness (Haque et al., 2023). To address this, we propose to update the dataset a limited number of times $A$, annealing the strength (Meng et al., 2022) of the Image2Image view correction at each update. The annealing function is as follows:
|
| 150 |
+
|
| 151 |
+
$$
|
| 152 |
+
S(a) = S_{\text{init}} - \frac{a - 1}{A}\left(S_{\text{init}} - S_{\text{final}}\right), \quad a = 1, 2, 3, \dots, A, \tag{5}
|
| 153 |
+
$$
|
| 154 |
+
|
| 155 |
+
where $S_{\mathrm{init}}$ and $S_{\mathrm{final}}$ are the initial strength and final strength respectively. $S(a)$ denotes the strength for the $a$ th updates. Note that lower strength means that diffusion starts from later timesteps, resulting in finer detail correction. Our strategy performs editing in a coarse-to-fine manner. Each time, all the views are updated to prevent accumulated errors.
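As a concrete illustration, a small sketch of one annealed dataset update is given below. The strength endpoints and the diffusers-style img2img call are assumptions for illustration, not the paper's exact implementation:

```python
def annealed_strength(a: int, A: int, s_init: float = 0.9, s_final: float = 0.4) -> float:
    """Eq. (5): linearly decayed SDEdit strength for the a-th dataset update."""
    return s_init - (a - 1) / A * (s_init - s_final)

def dataset_update(pipe, prompt, render_fn, views, a, A):
    """One round of annealed dataset editing: re-render every selected view from the
    current 3D Gaussians and correct it with the (LoRA fine-tuned) img2img pipeline.
    `pipe` is assumed to expose a `strength` argument, as a Stable Diffusion
    image-to-image pipeline does."""
    s = annealed_strength(a, A)
    return [pipe(prompt, image=render_fn(v), strength=s).images[0] for v in views]
```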
|
| 156 |
+
|
| 157 |
+
Loss Function. With the rendered image $I_r^v$ from 3D Gaussians, the corresponding edited image $I_e^v$ as the editing area's groundtruth, the original image $I_o^v$ as background groundtruth and mask $m^v$ for view $v$ , our loss function for training 3D Gaussians is formulated as:
|
| 158 |
+
|
| 159 |
+
$$
|
| 160 |
+
\mathcal{L} = \sum_{v = 1}^{V} \Big( \big( \lambda_{1} \mathcal{L}_{1}\left(I_{r}^{v}, I_{o}^{v}\right) + \lambda_{\text{ssim}} \mathcal{L}_{\text{ssim}}\left(I_{r}^{v}, I_{o}^{v}\right) \big) \odot \left(1 - m^{v}\right) + \lambda_{\text{lpips}} \mathcal{L}_{\text{lpips}}\left(I_{r}^{v}, I_{e}^{v}\right) \odot m^{v} \Big), \tag{6}
|
| 161 |
+
$$
|
| 162 |
+
|
| 163 |
+
where $\mathcal{L}_1$ and $\mathcal{L}_{\mathrm{ssim}}$ keep the region outside the mask close to the original views, ensuring local editing, and $\mathcal{L}_{\mathrm{lpips}}$ is the LPIPS (Zhang et al., 2018) loss that corrects the editing area. $\lambda_1$, $\lambda_{\mathrm{ssim}}$, and $\lambda_{\mathrm{lpips}}$ are the weighting coefficients for each loss.
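A loose PyTorch-style sketch of this objective for a single view follows. Masking SSIM and LPIPS by zeroing out pixels is an approximation of the per-region terms in Eq. (6), and `ssim_fn` is an assumed SSIM implementation (e.g. from `pytorch_msssim`); the weights 8, 2, and 1 are those reported in Sec. 4.1:

```python
import torch
import torch.nn.functional as F
import lpips  # perceptual similarity package by Zhang et al.

lpips_fn = lpips.LPIPS(net="vgg")

def edit_loss(I_r, I_o, I_e, m, ssim_fn, w_l1=8.0, w_ssim=2.0, w_lpips=1.0):
    """I_r: render, I_o: original, I_e: edited target; all (1,3,H,W) in [0,1].
    m: (1,1,H,W) binary editing mask."""
    bg = 1.0 - m
    l1 = (F.l1_loss(I_r, I_o, reduction="none") * bg).mean()
    l_ssim = 1.0 - ssim_fn(I_r * bg, I_o * bg)                    # background-only SSIM (approx.)
    l_lpips = lpips_fn(I_r * m * 2 - 1, I_e * m * 2 - 1).mean()   # LPIPS expects inputs in [-1, 1]
    return w_l1 * l1 + w_ssim * l_ssim + w_lpips * l_lpips
```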
|
| 164 |
+
|
| 165 |
+
# 3.5 FROM ONE-STEP TO MULTI-STEP DRAG EDITING
|
| 166 |
+
|
| 167 |
+
The previous sections introduce the one-step drag editing using our method. As the long-distance drag operation often requires more than one step to avoid corruption, we propose a multi-step editing scheduler to solve such problems. Specifically, we split the drag operation into $T$ intervals and set the progressive target points $\{p_t'(u)|u = 1,2,\dots,T\}$ . In each interval, we perform drag toward the corresponding target points:
|
| 168 |
+
|
| 169 |
+
$$
|
| 170 |
+
p _ {t} ^ {\prime} (u) = p _ {h} + \frac {u}{T} \left(p _ {t} - p _ {h}\right), \tag {7}
|
| 171 |
+
$$
|
| 172 |
+
|
| 173 |
+
However, the actual handle point position usually changes when training 3D Gaussians. We propose relocating the handle points at the end of every interval to make the next interval's deformation more precise. In addition, we further conduct history-aware diffusion fine-tuning to improve the ability for more aggressive editing.
|
| 174 |
+
|
| 175 |
+
Handle Point Relocation. Handle point relocation is performed after each interval's training process. To keep track of the handle points, we use the Gaussians associated with each handle point. Specifically, for handle point $p_h^i$, we update it with the averaged position change of the Gaussians $P_h^i$. As shown in Figure 4, the dragged part can be successfully relocated. Note that the assigned Gaussians $P_h^i$ are updated to the newly deformed Gaussians during deformation and inherited from their parents during the densification process of training. The local editing mask is updated as its union with the mask of the newly deformed Gaussians.
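A compact sketch of the multi-step scheduler and the relocation step follows; the function boundaries and data layout are our own simplification:

```python
import numpy as np

def progressive_targets(p_h, p_t, T):
    """Eq. (7): intermediate target points for intervals u = 1..T."""
    return [p_h + (u / T) * (p_t - p_h) for u in range(1, T + 1)]

def relocate_handles(p_h, mu_before, mu_after, assigned):
    """Move each handle point by the mean displacement of its tracked Gaussians.
    assigned[i] holds the indices of the Gaussians P_h^i for handle i."""
    p_new = p_h.copy()
    for i, idx in enumerate(assigned):
        p_new[i] += (mu_after[idx] - mu_before[idx]).mean(axis=0)
    return p_new
```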
|
| 176 |
+
|
| 177 |
+
History-Aware Diffusion Fine-Tuning. For long-distance drag operations, the edited 2D images can shift out of the diffusion model's domain since it is fine-tuned on the original images, resulting in degeneration back to the original images. We build an image buffer to fine-tune the diffusion model. The diffusion model will be fine-tuned with the image buffer every interval. Initially, the buffer only contains original images, and the newly edited result will be added to the buffer during intervals.
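The history buffer can be as simple as a growing list of views used to periodically re-fine-tune the LoRA; the sampling interface below is an assumption:

```python
import random

class HistoryImageBuffer:
    """Keeps original and previously edited views so the diffusion model stays in-domain."""
    def __init__(self, original_views, seed=0):
        self.images = list(original_views)     # initialized with the original captures
        self.rng = random.Random(seed)

    def add(self, edited_views):
        self.images.extend(edited_views)       # enqueue each interval's edited results

    def sample(self, k):
        return self.rng.sample(self.images, min(k, len(self.images)))
```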
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
Figure 5: Qualitative results in various scenes: Our method can handle complex scenes and generate highly detailed results. With a simple drag input, 3DGS-Drag can identify the 3D context and perform edits like moving objects, inpainting the background, adjusting appearance, modifying object shape, and adjusting motion. The orange bounding boxes highlight the modified regions.
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
|
| 184 |
+
# 4 EXPERIMENT
|
| 185 |
+
|
| 186 |
+
# 4.1 IMPLEMENTATION DETAILS
|
| 187 |
+
|
| 188 |
+
User Input. Our user input is one or multiple handle points and their corresponding target points. The input points are in 3D space. The user can specify the sphere radius of each handle point to adjust the editing scale. We automatically perform local editing by applying the mask rendered from the assigned Gaussians. The mask is dilated to cover the necessary surrounding context.
|
| 189 |
+
|
| 190 |
+
Drag Editing. The pretrained 3D Gaussians are trained with the original 3D Gaussian Splatting (Kerbl et al., 2023). During editing, 50 views are selected by default to enable efficient editing. Specifically, we choose the views with a larger visible area of the handle points' Gaussians, determined by the local editing mask on each view. We fine-tune the Dreambooth model (Ruiz et al., 2023) with LoRA (Hu et al., 2022). Initially, it is fine-tuned on the selected views with batch size 4 for 200 iterations. After each dragging interval, we continue fine-tuning the diffusion model for 50 iterations with the updated image buffer, into which the newly edited images are enqueued. The loss weights $\lambda_{1}$, $\lambda_{\mathrm{ssim}}$, and $\lambda_{\mathrm{lpips}}$ are set to 8, 2, and 1, respectively. Note that $\lambda_{1}$ and $\lambda_{\mathrm{ssim}}$ are 10 times larger than usual to preserve the background.
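A minimal sketch of the view selection heuristic described above, assuming one rendered editing mask per candidate view:

```python
import numpy as np

def select_views(view_masks, num_views=50):
    """Rank candidate views by the area their editing mask covers and keep the top ones."""
    areas = np.array([mask.sum() for mask in view_masks])
    return np.argsort(-areas)[:num_views].tolist()
```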
|
| 191 |
+
|
| 192 |
+
Datasets. Our experiments include edits on eight scenes, using the published datasets from Instruct-NeRF2NeRF (Haque et al., 2023), PDS (Koo et al., 2024), Mip-NeRF360 (Barron et al., 2022), and Tanks and Temples (Knapitsch et al., 2017).
|
| 193 |
+
|
| 194 |
+
# 4.2 QUALITATIVE EVALUATION
|
| 195 |
+
|
| 196 |
+
Editing Results in Various Scenes. We show editing results from different views in Figure 1 and Figure 5. Since the handle and target points are in 3D, we plot them in 2D for illustration. Each drag is represented by a red arrow where the start is the handle point, and the end is the target point. In the standing-person scene in Figure 1, when we raise one hand, this is very challenging since the arm is only observed partially, and the part under the arm is unknown. Our method also shows the ability to generate new poses and fix the texture on the pants below the arm. We are also able to change the leg motion and extend the sleeves. When dealing with more complex scenes, such as the bamboo scene in Figure 1, 3DGS-Drag can understand the texture of the plant and extend it to be taller or wider. We can also easily change part of the background, like the wall. When the drag operation is to move the football, we can separate this object from the background and inpaint the texture at the original position instead of an empty region. In short, our drag operation can understand different operations
in front-view or 360-degree scenes, such as moving objects and extending objects, demonstrating the ability to identify the 3D context.

Figure 6: Baseline comparisons: Compared with the baselines, 3DGS-Drag achieves high-quality, fine-grained editing, correctly modifying the different parts, and is also the most efficient. Specifically, Instruct-NeRF2NeRF (Haque et al., 2023) and PDS (Koo et al., 2024) cannot correctly edit, Deformation results in incomplete edits, and SDE-Drag (Nie et al., 2024) sometimes fails to make changes. (Columns: User Edit, Deformation, Instruct-NeRF2NeRF (1.5h), PDS (10h), SDE-Drag (1h), 3DGS-Drag (Ours) (10-20 mins).)

Figure 7: Ablation study on the local mask and drag steps: Without the local mask, the scene will be blurred, resulting in failed edits. Using very few steps makes it hard to achieve aggressive edits. More steps slightly improve the performance. (Columns: User Edit, W/O Local Editing, 1-Step Drag, 5-Step Drag, 20-Step Drag.)
|
| 258 |
+
|
| 259 |
+
Baseline Comparison. Since there is no directly comparable work on intuitive 3D drag operation in real scenes, we extend and re-purpose representative baselines. The results are shown in Figure 6. Specifically, the comparison with baselines is listed as follows:
|
| 260 |
+
|
| 261 |
+
- Instruct-NeRF2NeRF (Haque et al., 2023): We manually create text descriptions for drag operations in this baseline. Then, we use Instruct-NeRF2NeRF to edit the scene. The model fails to give edits for the 'person' scene. For the more complex 'garden' scene, Instruct-NeRF2NeRF just blurs the rendering. This demonstrates its insufficient ability to perform geometric modification.
|
| 262 |
+
- Deformation: We use our deformation to represent the previous deformation-based approaches since we have different input settings. Notably, the geometry is moved, which results in a lot of incorrect content and artifacts.
|
| 263 |
+
- PDS: PDS (Koo et al., 2024) claims to be able to change the geometry, but this method struggles in all three editing scenarios. In addition, PDS tends to create noisy and blurred editing results compared with others.
|
| 264 |
+
- SDE-Drag: One alternative solution is to simply use the 2D drag method on each view. Here we choose SDE-Drag (Nie et al., 2024) in comparison. However, such a strategy cannot reach consistent edits, resulting in flawed results or failure cases in editing.
|
| 265 |
+
|
| 266 |
+
Compared with these baselines, our method achieves significantly better editing results, with finer details and correct content. Remarkably, for the "lower his hairline" text prompt, both Instruct-NeRF2NeRF and PDS misunderstand the text and make the hairline higher, which further emphasizes the importance of intuitive 3D editing.
|
| 267 |
+
|
| 268 |
+
Ablation Study. The effectiveness of the diffusion guidance is validated by the comparison with the deformation-only approach (Figure 6). Here, we further ablate the local mask and the multi-step strategy in Figure 7. (1) When local editing is not applied, the entire scene is blurred and the edits fail. This is due to an optimization issue: inconsistent edits create large floaters in the 3D Gaussians. (2) For the drag steps, we compare three settings [1, 5, 20]. When using a single step, the deformed Gaussians cannot give enough guidance to the diffusion model, resulting in broken edits; thus, one-step drag editing usually struggles with more aggressive edits. When applying more steps (20 steps), the editing quality is slightly improved, illustrating that 3DGS-Drag is robust to more update rounds. However, since more steps slow down execution, choosing an appropriate number of steps is preferable.
|
| 269 |
+
|
| 270 |
+
# 4.3 QUANTITATIVE EVALUATION
|
| 271 |
+
|
| 272 |
+
Quantitatively evaluating 3D editing results is often challenging since ground truth is lacking. Here, we use two metrics for evaluation: user preference and GPT score, shown in Figure 8. For user preference, we conducted a user study across 19 subjects and collected their preference for each edit. For the GPT score, since GPT with vision has been shown to be a human-aligned evaluator (Wu et al., 2024), we use GPT-4o to evaluate each edit, rating it on 5 levels. Specifically, we measure (1) whether the content is correctly edited and (2) the rendered image quality for each method. Our method achieves the best results on all these metrics.
|
| 273 |
+
|
| 274 |
+

|
| 275 |
+
Figure 8: Quantitative evaluation: We conduct both user study and GPT evaluation on the editing results. Compared with Instruct-NeRF2NeRF (Haque et al., 2023) and PDS (Koo et al., 2024), 3DGS-Drag performs significantly better.
|
| 276 |
+
|
| 277 |
+

|
| 278 |
+
|
| 279 |
+
# 4.4 DISCUSSION
|
| 280 |
+
|
| 281 |
+
Limitations. Similar to previous diffusion-based 3D editing methods (Chen et al., 2024b; Haque et al., 2023), our approach relies on the diffusion model to provide accurate guidance. Thus, our method may yield suboptimal results when the target object is too small within the field of view or when the scene is considerably large and complex. We also cannot handle drag operations that are too aggressive; in such cases, the object may be relocated to areas with restricted visibility, which are out of view for most cameras.
|
| 282 |
+
|
| 283 |
+
Running Time. When using 50 views for editing, our method needs 15 minutes. Specifically, about 2 minutes are needed for initial diffusion model fine-tuning, and 13 minutes are needed for the rest of the editing process. In comparison, Instruct-NeRF2NeRF (Haque et al., 2023) needs one hour. The running time is tested on a single RTX 4090 GPU.
|
| 284 |
+
|
| 285 |
+
# 5 CONCLUSION
|
| 286 |
+
|
| 287 |
+
In this paper, we introduced 3DGS-Drag, an intuitive drag editing approach for 3D scenes. In contrast to previous work (Haque et al., 2023; Dong & Wang, 2023; Wang et al., 2023), which mainly focuses on appearance, we address the challenge of geometry-related content editing. Empirical experiments show that our method can achieve highly detailed edits across various scenes. Such an advantage stems primarily from our two key contributions: the copy-and-paste Gaussian deformation and the diffusion correction. We showcase that our method enables previously challenging edits, paving the way for exploring new possibilities in 3D editing.
|
| 288 |
+
|
| 289 |
+
# ACKNOWLEDGMENTS
|
| 290 |
+
|
| 291 |
+
This work was supported in part by NSF Grant 2106825, NIFA Award 2020-67021-32799, the Toyota Research Institute, the IBM-Illinois Discovery Accelerator Institute, the Amazon-Illinois Center on AI for Interactive Conversational Experiences, Snap Inc., and the Jump ARCHES endowment through the Health Care Engineering Systems Center at Illinois and the OSF Foundation. This work used computational resources, including the NCSA Delta and DeltaAI supercomputers through allocations CIS220014 and CIS230012 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, as well as the TACC Frontera supercomputer, Amazon Web Services (AWS), and OpenAI API through the National Artificial Intelligence Research Resource (NAIRR) Pilot.
|
| 292 |
+
|
| 293 |
+
# REPRODUCIBILITY STATEMENT
|
| 294 |
+
|
| 295 |
+
Our code is released at https://github.com/Dongjiahua/3DGS-Drag. For the implementation details, we have covered our mathematical details in Sec. 3.3 and training details in Sec. 4.1. The framework architecture is fully introduced in Sec. 3. All the datasets we used are publicly available, as explained in Sec. 4.1.
|
| 296 |
+
|
| 297 |
+
# REFERENCES
|
| 298 |
+
|
| 299 |
+
Rameen Abdal, Peihao Zhu, Niloy J Mitra, and Peter Wonka. Styleflow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics, 2021.
|
| 300 |
+
Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. SINE: Semantic-driven image-based NeRF editing with prior-guided editing field. In CVPR, 2023.
|
| 301 |
+
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In CVPR, 2022.
|
| 302 |
+
Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In CVPR, 2023.
|
| 303 |
+
Jun-Kun Chen and Yu-Xiong Wang. Proedit: Simple progression is all you need for high-quality 3d scene editing. NeurIPS, 2024.
|
| 304 |
+
Jun-Kun Chen, Jipeng Lyu, and Yu-Xiong Wang. Neuraleditor: Editing neural radiance fields via manipulating point clouds. In CVPR, 2023.
|
| 305 |
+
Jun-Kun Chen, Samuel Rota Bulò, Norman Müller, Lorenzo Porzi, Peter Kontschieder, and Yu-Xiong Wang. Consistdreamer: 3d-consistent 2d diffusion for high-fidelity scene editing. In CVPR, 2024a.
|
| 306 |
+
Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. Gaussianeditor: Swift and controllable 3d editing with gaussian splatting. In CVPR, 2024b.
|
| 307 |
+
Pei-Ze Chiang, Meng-Shiun Tsai, Hung-Yu Tseng, Wei-Sheng Lai, and Wei-Chen Chiu. Stylizing 3D scene via implicit representation and hypernetwork. In WACV, 2022.
|
| 308 |
+
Jiahua Dong and Yu-Xiong Wang. Vica-nerf: View-consistency-aware 3d editing of neural radiance fields. In NeurIPS, 2023.
|
| 309 |
+
Shaocong Dong, Lihe Ding, Zhanpeng Huang, Zibin Wang, Tianfan Xue, and Dan Xu. Interactive3d: Create what you want by interactive 3d generation. In CVPR, pp. 4999-5008, 2024.
|
| 310 |
+
Yuki Endo. User-controllable latent transformer for stylegan image layout editing. In Computer Graphics Forum, 2022.
|
| 311 |
+
Jiemin Fang, Junjie Wang, Xiaopeng Zhang, Lingxi Xie, and Qi Tian. Gaussianeditor: Editing 3d gaussians delicately with text instructions. In CVPR, 2024.
|
| 312 |
+
|
| 313 |
+
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014.
|
| 314 |
+
Ayaan Haque, Matthew Tancik, Alexei A Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. In ICCV, 2023.
|
| 315 |
+
Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. Ganspace: Discovering interpretable gan controls. In NeurIPS, 2020.
|
| 316 |
+
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In ICLR, 2022.
|
| 317 |
+
Hsin-Ping Huang, Hung-Yu Tseng, Saurabh Saini, Maneesh Singh, and Ming-Hsuan Yang. Learning to stylize novel views. In ICCV, 2021.
|
| 318 |
+
Yi-Hua Huang, Yue He, Yu-Jie Yuan, Yu-Kun Lai, and Lin Gao. StylizedNeRF: Consistent 3D scene stylization as stylized NeRF via 2D-3D mutual learning. In CVPR, 2022.
|
| 319 |
+
Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Sc-gs: Sparse-controlled gaussian splatting for editable dynamic scenes. In CVPR, 2024.
|
| 320 |
+
Clément Jambon, Bernhard Kerbl, Georgios Kopanas, Stavros Diolatzis, Thomas Leimkuhler, and George Drettakis. NeRFshop: Interactive editing of neural radiance fields. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2023.
|
| 321 |
+
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.
|
| 322 |
+
Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In CVPR, 2023.
|
| 323 |
+
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 2023.
|
| 324 |
+
Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics, 2017.
|
| 325 |
+
Juil Koo, Chanho Park, and Minhyuk Sung. Posterior distillation sampling. In CVPR, 2024.
|
| 326 |
+
Thomas Leimkuhler and George Drettakis. Freestylegan: Free-view editable portrait rendering with the camera manifold. In SIGGRAPH Asia, 2021.
|
| 327 |
+
Jingyi Lu, Xinghui Li, and Kai Han. Regiondrag: Fast region-based image editing with diffusion models. In ECCV, 2024.
|
| 328 |
+
Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In ICLR, 2022.
|
| 329 |
+
Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 2021.
|
| 330 |
+
Thu Nguyen-Phuoc, Feng Liu, and Lei Xiao. SNeRF: Stylized neural implicit representations for 3D scenes. In WACV, 2022.
|
| 331 |
+
Shen Nie, Hanzhong Allan Guo, Cheng Lu, Yuhao Zhou, Chenyu Zheng, and Chongxuan Li. The blessing of randomness: Sde beats ode in general diffusion-based image editing. In ICLR, 2024.
|
| 332 |
+
Xingang Pan, Ayush Tewari, Thomas Leimkuhler, Lingjie Liu, Abhimitra Meka, and Christian Theobalt. Drag your gan: Interactive point-based manipulation on the generative image manifold. In SIGGRAPH, 2023.
|
| 333 |
+
Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In ICLR, 2023.
|
| 334 |
+
|
| 335 |
+
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021.
|
| 336 |
+
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. In arXiv preprint arXiv:2204.06125, 2022.
|
| 337 |
+
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
|
| 338 |
+
Nataniel Ruiz, Yuzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 2023.
|
| 339 |
+
Sitian Shen, Jing Xu, Yuheng Yuan, Xingyi Yang, Qiuhong Shen, and Xinchao Wang. Draggaussian: Enabling drag-style manipulation on 3d gaussian representation. In CVPR, 2024.
|
| 340 |
+
Yujun Shi, Chuhui Xue, Jiachun Pan, Wenqing Zhang, Vincent YF Tan, and Song Bai. Dragdiffusion: Harnessing diffusion models for interactive point-based image editing. In CVPR, 2024.
|
| 341 |
+
Olga Sorkine. Laplacian mesh processing. Eurographics (State of the Art Reports), 2005.
|
| 342 |
+
Olga Sorkine-Hornung and Marc Alexa. As-rigid-as-possible surface modeling. In Eurographics Symposium on Geometry Processing, 2007.
|
| 343 |
+
Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. In ICLR, 2024.
|
| 344 |
+
Can Wang, Ruixiang Jiang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Nerf-art: Text-driven neural radiance fields stylization. IEEE Transactions on Visualization and Computer Graphics, 2023.
|
| 345 |
+
Qiling Wu, Jianchao Tan, and Kun Xu. PaletteNeRF: Palette-based color editing for NeRFs. In CVPR, 2023.
|
| 346 |
+
Tong Wu, Guandao Yang, Zhibing Li, Kai Zhang, Ziwei Liu, Leonidas Guibas, Dahua Lin, and Gordon Wetzstein. Gpt-4v (ision) is a human-aligned evaluator for text-to-3d generation. In CVPR, 2024.
|
| 347 |
+
Tianhao Xie, Eugene Belilovsky, Sudhir Mudur, and Tiberiu Popa. Dragd3d: Vertex-based editing for realistic mesh deformations using 2d diffusion priors. In arXiv preprint arXiv:2310.04561, 2023.
|
| 348 |
+
Tianyi Xie, Zeshun Zong, Yuxing Qiu, Xuan Li, Yutao Feng, Yin Yang, and Chenfanfu Jiang. Phys-gaussian: Physics-integrated 3d gaussians for generative dynamics. In CVPR, pp. 4389-4398, 2024.
|
| 349 |
+
Tianhan Xu and Tatsuya Harada. Deforming radiance fields with cages. In ECCV, 2022.
|
| 350 |
+
Seungwoo Yoo, Kunho Kim, Vladimir G Kim, and Minhyuk Sung. As-plausible-as-possible: Plausibility-aware mesh deformation using 2d diffusion priors. In CVPR, pp. 4315-4324, 2024.
|
| 351 |
+
Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, and Lin Gao. Nerf-editing: geometry editing of neural radiance fields. In CVPR, 2022.
|
| 352 |
+
Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, and Noah Snavely. ARF: Artistic radiance fields. In ECCV, 2022.
|
| 353 |
+
Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
|
| 354 |
+
Jingyu Zhuang, Chen Wang, Liang Lin, Lingjie Liu, and Guanbin Li. Dreameditor: Text-driven 3d scene editing with neural fields. In SIGGRAPH Asia, 2023.
|
| 355 |
+
|
| 356 |
+
# A DEMO VIDEO
|
| 357 |
+
|
| 358 |
+
A demo video of our framework description and editing results is included in the supplementary material.
|
| 359 |
+
|
| 360 |
+
# B ADDITIONAL EXPERIMENTS
|
| 361 |
+
|
| 362 |
+
# B.1 QUALITATIVE RESULTS ON LARGE OBJECT MOVEMENTS
|
| 363 |
+
|
| 364 |
+
We further conduct experiments on object movements, specifically including large movements of both regular-sized and large objects. As shown in Figure 9, our method succeeds in handling long-distance movements, such as repositioning the flowerpot. Furthermore, for very large objects like the truck and the table, our approach effectively moves them in a specified direction while minimizing artifacts in the removed and placed regions. These results showcase the generalizability of our method.
|
| 365 |
+
|
| 366 |
+

|
| 367 |
+
|
| 368 |
+

|
| 369 |
+
|
| 370 |
+

|
| 371 |
+
|
| 372 |
+

|
| 373 |
+
|
| 374 |
+

|
| 375 |
+
User Edits
|
| 376 |
+
|
| 377 |
+

|
| 378 |
+
Rendered Drag Results
|
| 379 |
+
Figure 9: Additional qualitative results for larger movements and larger objects: Our method succeeds in longer-range movements like moving the flowerpot and large object movements like moving the table.
|
| 380 |
+
|
| 381 |
+
# B.2 ABLATION ON DATASET EDITING STRATEGY
|
| 382 |
+
|
| 383 |
+
To validate the importance of our dataset editing strategy, we compare our annealed dataset editing with the iterative dataset editing of Instruct-NeRF2NeRF (Haque et al., 2023). As shown in Figure 10, editing one frame at a time cannot change the geometry because of inconsistent constraints from the other unedited views, so the result degenerates back to the original scene. In contrast, our method successfully makes the edits.
|
| 384 |
+
|
| 385 |
+
# B.3 QUANTITATIVE ABLATION ON LOCAL EDITING
|
| 386 |
+
|
| 387 |
+
To further demonstrate the effectiveness of our local editing, we conduct a quantitative evaluation on the "extend sleeves" edit. Specifically, we compute the similarity between the rendered edited result and the original rendering over the unedited pixels. As shown in Table 1, our local editing strategy preserves the unedited regions and backgrounds effectively.
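A minimal sketch of how such unedited-region metrics could be computed is shown below; the exact masking protocol is an assumption (edited pixels are replaced by the original before scoring), and `edit_mask` is a hypothetical input.

```python
# Sketch of SSIM/PSNR/LPIPS restricted to unedited pixels (masking protocol assumed).
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

lpips_fn = lpips.LPIPS(net="alex")  # perceptual metric

def unedited_region_metrics(original: np.ndarray, edited: np.ndarray,
                            edit_mask: np.ndarray) -> dict:
    """original/edited: HxWx3 floats in [0, 1]; edit_mask: HxW bool (True = edited pixel)."""
    # Copy original pixels into the edited region so only unedited pixels differ.
    merged = np.where(edit_mask[..., None], original, edited)
    ssim = structural_similarity(original, merged, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(original, merged, data_range=1.0)
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1  # to [-1, 1]
    lp = lpips_fn(to_t(original), to_t(merged)).item()
    return {"SSIM": ssim, "PSNR": psnr, "LPIPS": lp}
```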
|
| 388 |
+
|
| 389 |
+

|
| 390 |
+
User Edit
|
| 391 |
+
|
| 392 |
+

|
| 393 |
+
Annealed Dataset Editing (Ours)
|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
Instruct-NeRF2NeRF Dataset Editing
|
| 397 |
+
Figure 10: Ablation on dataset editing strategies: Iterative dataset editing from Instruct-NeRF2NeRF (Haque et al., 2023) leads to degenerated results. In contrast, our annealed dataset editing maintains the geometry change.
|
| 398 |
+
|
| 399 |
+
<table><tr><td></td><td>SSIM↑</td><td>PSNR↑</td><td>LPIPS↓</td></tr><tr><td>Local Editing</td><td>0.995</td><td>43.43</td><td>0.004</td></tr><tr><td>Non-Local Editing</td><td>0.901</td><td>24.44</td><td>0.158</td></tr></table>
|
| 400 |
+
|
| 401 |
+
Table 1: Quantitative ablation on local editing: Our local editing strategy demonstrates a strong capability to preserve the unedited regions and backgrounds effectively.
|
| 402 |
+
|
| 403 |
+
# B.4 SCALED USER STUDY
|
| 404 |
+
|
| 405 |
+
To improve the generalizability of our user study, we increased the number of participants from 19 to 99. As shown in Figure 11, the key conclusion that our method surpasses previous baselines remains the same.
|
| 406 |
+
|
| 407 |
+

|
| 408 |
+
User Preference
|
| 409 |
+
Figure 11: Scaled user study with 99 participants: Our method still achieves significantly better preference over the baselines.
|
| 410 |
+
|
| 411 |
+
# B.5 COMPARISON WITH 2D DRAG METHODS
|
| 412 |
+
|
| 413 |
+
We conduct qualitative comparisons on 2D drag edits to verify our method's effectiveness. Specifically, we focus on the quality of our deformation-guided diffusion editing, comparing it with the previous 2D drag methods DragDiffusion (Shi et al., 2024) and SDE-Drag (Nie et al., 2024). As shown in Figure 12, DragDiffusion tends to create many more artifacts and unrelated textures, while our result is cleaner and matches the edit better. SDE-Drag fails to move the leg correctly and instead generates an object at the target location. These results show that our method achieves better consistency by operating directly at the image level.
|
| 414 |
+
|
| 415 |
+

|
| 416 |
+
User Edit
|
| 417 |
+
|
| 418 |
+

|
| 419 |
+
DragDiffusion
|
| 420 |
+
|
| 421 |
+

|
| 422 |
+
SDE-Drag
|
| 423 |
+
|
| 424 |
+

|
| 425 |
+
Ours
|
| 426 |
+
|
| 427 |
+

|
| 428 |
+
Figure 12: Comparison on 2D drag results: Compared with recent 2D drag methods, our method does not need a time-consuming inverse-forward process and produces more consistent results. In comparison, the baseline DragDiffusion generates noisy legs and floor. SDE-Drag succeeds in maintaining the background but inserts objects in the hand and does not correctly move the leg.
|
| 429 |
+
User Edits
|
| 430 |
+
|
| 431 |
+

|
| 432 |
+
Rendered Drag Results
|
| 433 |
+
Figure 13: Limitation on generating unseen side: When dragging background objects with an unseen side to the foreground, the results from other views are incorrect.
|
| 434 |
+
|
| 435 |
+
# C IN-DEPTH DISCUSSION OF LIMITATIONS
|
| 436 |
+
|
| 437 |
+
Our method encounters two specific failure cases: generating the unseen side of an object and dragging objects outside the border area. Here, we present qualitative results to further elucidate these limitations.
|
| 438 |
+
|
| 439 |
+
# C.1 GENERATING UNSEEN SIDE
|
| 440 |
+
|
| 441 |
+
As shown in Figure 13, when moving the wooden support to the foreground, the unseen portions (e.g., the back of the object) are rendered incorrectly and exhibit noticeable artifacts. This occurs because there are no 3D Gaussians to represent the unseen back side. Consequently, the deformed result contains meaningless patterns that cannot be corrected through the diffusion process.
|
| 442 |
+
|
| 443 |
+
# C.2 DRAGGING OBJECTS OUTSIDE THE BORDER
|
| 444 |
+
|
| 445 |
+
As shown in Figure 14, we struggle to refine the artifacts when only a portion of the object is visible at the border. The diffusion model has limited capability to correct the boundary region because of the ambiguity in completing a partial object into a whole. In addition, since the region is observed from sparser views, optimizing the 3D Gaussians there becomes even more challenging.
|
| 446 |
+
|
| 447 |
+
# C.3 POTENTIAL BIASES IN QUANTITATIVE EVALUATIONS
|
| 448 |
+
|
| 449 |
+
While we employ both human preference scores and automated metrics from GPT evaluation, there could still be potential biases, such as those arising from the participant selection process or the specific version and training data of the GPT models used. This challenge is common in generative
|
| 450 |
+
|
| 451 |
+

|
| 452 |
+
Figure 14: Limitation on dragging objects outside the border: Refining and optimizing become challenging when dragging towards the unseen or border area.
|
| 453 |
+
|
| 454 |
+

|
| 455 |
+
|
| 456 |
+
modeling, where there is no ground truth. Future work would benefit from conducting larger-scale human studies with more diverse participant pools and developing more comprehensive evaluation protocols that can better assess both geometric accuracy and visual fidelity of 3D edits.
|
| 457 |
+
|
| 458 |
+
# D ADDITIONAL DEFORMATION DETAILS
|
| 459 |
+
|
| 460 |
+
The calculation of the relative rotation $\Delta q_h^{ik}$ is briefly described as follows. Given handle points $p_h^i$ and $p_h^k$ and their corresponding target points $p_t^i$ and $p_t^k$, we first compute the unit vectors
|
| 461 |
+
|
| 462 |
+
$$
v_h^{ik} = \frac{p_h^k - p_h^i}{\left\| p_h^k - p_h^i \right\|} \quad \text{and} \quad v_t^{ik} = \frac{p_t^k - p_t^i}{\left\| p_t^k - p_t^i \right\|} \tag{8}
$$
|
| 465 |
+
|
| 466 |
+
Next, we compute the cross product and the dot product of these two unit vectors:
|
| 467 |
+
|
| 468 |
+
$$
\mathbf{r} = v_h^{ik} \times v_t^{ik}, \quad s = v_h^{ik} \cdot v_t^{ik} \tag{9}
$$
|
| 471 |
+
|
| 472 |
+
We then construct the quaternion $\Delta q_h^{ik}$ by combining the dot product and cross product:
|
| 473 |
+
|
| 474 |
+
$$
\Delta q_h^{ik} = \left[ s, \mathbf{r}_x, \mathbf{r}_y, \mathbf{r}_z \right] \tag{10}
$$
|
| 477 |
+
|
| 478 |
+
Finally, we normalize the quaternion to ensure it has unit length.
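A small NumPy sketch of Eqs. (8)-(10), written directly from the formulas above (it assumes the two directions are not antiparallel):

```python
# Relative rotation quaternion between a handle direction and a target direction,
# following Eqs. (8)-(10) literally.
import numpy as np

def relative_rotation(p_h_i, p_h_k, p_t_i, p_t_k):
    v_h = (p_h_k - p_h_i) / np.linalg.norm(p_h_k - p_h_i)   # Eq. (8)
    v_t = (p_t_k - p_t_i) / np.linalg.norm(p_t_k - p_t_i)
    r = np.cross(v_h, v_t)                                   # Eq. (9)
    s = np.dot(v_h, v_t)
    q = np.array([s, r[0], r[1], r[2]])                      # Eq. (10)
    return q / np.linalg.norm(q)                             # normalize to unit length

# Example: rotation taking the +x direction toward the +y direction.
q = relative_rotation(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                      np.zeros(3), np.array([0.0, 1.0, 0.0]))
```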
|
| 479 |
+
|
| 480 |
+
# E SOCIAL IMPACT AND FUTURE WORK
|
| 481 |
+
|
| 482 |
+
Future Work. We plan to extend the current progressive editing capabilities to generate 3D animations. As 3DGS-Drag can move or modify objects progressively, it is possible to generate long-term trajectories and human motions. In addition, we will focus on improving the model's scalability to accommodate larger scenes with dynamic objects and shadow effects.
|
| 483 |
+
|
| 484 |
+
Potential Social Impact. The potential societal impact of 3DGS-Drag spans multiple dimensions. Designed as a fine-grained editing model, 3DGS-Drag offers convenient manipulation of 3D scenes and robust support for AR applications. In addition, given the rapid development and widespread adoption of 3D Gaussians, our method integrates seamlessly with this ecosystem. With a user-friendly interface that requires only the selection of handle and target points, our model is accessible even to untrained individuals.
|
3dgsdragdragginggaussiansforintuitivepointbased3dediting/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fead2b713ff7751238b53a562fc24d7d656bc050bfd9e09841023ca26e93385a
|
| 3 |
+
size 938603
|
3dgsdragdragginggaussiansforintuitivepointbased3dediting/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fb2ab9d442cafb8fb44c062f4374af4181f912e992444722fb438d503d5a430d
|
| 3 |
+
size 559790
|
3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/5b6c955e-a799-4e63-99df-a2223da2aaec_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f08c0e19370e6166953c90d294e590c602291b0ca121639d92ed5aec4d7f1394
|
| 3 |
+
size 87594
|
3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/5b6c955e-a799-4e63-99df-a2223da2aaec_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e401a321cf1eb8267573edc1966c245041b603baf1374f27f2049ec752572e29
|
| 3 |
+
size 110068
|
3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/5b6c955e-a799-4e63-99df-a2223da2aaec_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:84c6888f3f71503b448c8b69723b97eb9515a9ca3abf77aeb7d714565e272b2e
|
| 3 |
+
size 11657616
|
3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/full.md
ADDED
|
@@ -0,0 +1,407 @@
| 1 |
+
# 3DITSCENE: EDITING ANY SCENE VIA LANGUAGE-GUIDED DISENTANGLED GAUSSIAN SPLATTING
|
| 2 |
+
|
| 3 |
+
Qihang Zhang
|
| 4 |
+
|
| 5 |
+
CUHK
|
| 6 |
+
|
| 7 |
+
qzhang@link.cuhk.edu.hk
|
| 8 |
+
|
| 9 |
+
Yinghao Xu
|
| 10 |
+
|
| 11 |
+
Stanford
|
| 12 |
+
|
| 13 |
+
yhexu@stanford.edu
|
| 14 |
+
|
| 15 |
+
Chaoyang Wang
|
| 16 |
+
|
| 17 |
+
Snap Inc.
|
| 18 |
+
|
| 19 |
+
cwang9@snapchat.com
|
| 20 |
+
|
| 21 |
+
Hsin-Ying Lee
|
| 22 |
+
|
| 23 |
+
Snap Inc.
|
| 24 |
+
|
| 25 |
+
hlee5@snapchat.com
|
| 26 |
+
|
| 27 |
+
Gordon Wetzstein
|
| 28 |
+
|
| 29 |
+
Stanford
|
| 30 |
+
|
| 31 |
+
gordon.wetzstein@stanford.edu
|
| 32 |
+
|
| 33 |
+
Bolei Zhou
|
| 34 |
+
|
| 35 |
+
UCLA
|
| 36 |
+
|
| 37 |
+
bolei@cs.ucla.edu
|
| 38 |
+
|
| 39 |
+
Ceyuan Yang
|
| 40 |
+
|
| 41 |
+
ByteDance
|
| 42 |
+
|
| 43 |
+
limbo0066@gmail.com
|
| 44 |
+
|
| 45 |
+
"Move the girl, then delete the girl and Rotate the camera"
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
"Rotate the camera then delete the dog"
|
| 49 |
+
|
| 50 |
+

|
| 51 |
+
|
| 52 |
+

|
| 53 |
+
"Rotate the dog"
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
"Remove the headscarf then move the camera"
|
| 57 |
+
|
| 58 |
+

|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
|
| 62 |
+

|
| 63 |
+
"Move the boy outside"
|
| 64 |
+
|
| 65 |
+

|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
Figure 1: Image pairs edited by 3DitScene. Our method is capable of simultaneously handling various types of edits in both 2D and 3D spaces.
|
| 69 |
+
|
| 70 |
+

|
| 71 |
+
|
| 72 |
+

|
| 73 |
+
|
| 74 |
+

|
| 75 |
+
|
| 76 |
+

|
| 77 |
+
|
| 78 |
+
# ABSTRACT
|
| 79 |
+
|
| 80 |
+
Scene image editing is crucial for entertainment, photography, and advertising design. Existing methods solely focus on either 2D individual object or 3D global scene editing. This results in a lack of a unified approach to effectively control and manipulate scenes at the 3D level with different levels of granularity. In this work, we propose 3DitScene, a novel and unified scene editing framework leveraging language-guided disentangled Gaussian Splatting that enables seamless editing from 2D to 3D, allowing precise control over scene composition and individual objects. We first incorporate 3D Gaussians that are refined through generative priors and optimization techniques. Language features from CLIP then introduce semantics into 3D geometry for object disentanglement. With the disentangled
|
| 81 |
+
|
| 82 |
+
Gaussians, 3DitScene allows for manipulation at both the global and individual levels, revolutionizing creative expression and empowering control over scenes and objects. Experimental results demonstrate the effectiveness and versatility of 3DitScene in scene image editing. Code is available at https://github.com/zqh0253/3DitScene.
|
| 83 |
+
|
| 84 |
+
# 1 INTRODUCTION
|
| 85 |
+
|
| 86 |
+
Editing scene images is of great importance in various fields, ranging from entertainment to professional photography and advertising design. Content editing allows creators to build immersive and captivating experiences for audiences, convey their artistic vision effectively, and achieve the desired aesthetic outcomes. With the rapid development of deep generative modeling, many attempts have been made to edit images effectively. However, they have encountered limitations that hinder their potential.
|
| 87 |
+
|
| 88 |
+
Previous methods primarily concentrate on scene editing in 2D image space. They commonly rely on generative priors, such as GANs and Diffusion Models (DM), and employ techniques like modification of cross-attention mechanisms (Hertz et al., 2022; 2023), and optimization of network parameters (Kim et al., 2022; Kawar et al., 2023; Ruiz et al., 2023; Gal et al., 2022; Chen et al., 2023b) to edit the appearance and object identity within scene images. While some efforts have been made to extend these methods to 3D editing, they ignore 3D cues and pose a challenge in maintaining 3D consistency, especially when changing the camera pose. Moreover, these approaches typically focus on global scenes and lack the ability to disentangle objects accurately, resulting in limited control over individual objects at the 3D level.
|
| 89 |
+
|
| 90 |
+
In order to edit any scene image and enable 3D control over both the scene and its individual objects, we propose 3DitScene, a novel scene editing framework that leverages a new scene representation, language-guided disentangled Gaussian Splatting. Concretely, the given image is first projected into 3D Gaussians, which are further refined and enriched through a 2D generative prior (Rombach et al., 2022; Poole et al., 2022). We thus obtain a comprehensive 3D scene representation that naturally enables novel view synthesis for a given image. In addition, language features from CLIP are distilled into the corresponding 3D Gaussians to introduce semantics into 3D geometry. These semantic 3D Gaussians help disentangle individual objects from the entire scene representation, resulting in language-guided disentangled Gaussians for scene decomposition. They also allow for a more user-friendly interaction, i.e., users can query specific objects of interest via text. To this end, 3DitScene enables seamless editing from 2D to 3D and allows modifications at both the global and individual levels, empowering creators with precise control over scene composition and object-level edits.
|
| 91 |
+
|
| 92 |
+
We dub our pipeline 3DitScene. Different from previous works that focus on a single type of editing, 3DitScene integrates diverse editing requirements within a unified framework. Our teaser figure (Fig. 1) demonstrates the versatility of 3DitScene by showcasing its application to diverse scene images. We have conducted evaluations of 3DitScene under various settings, and the results demonstrate significant improvements over baseline methods.
|
| 93 |
+
|
| 94 |
+
# 2 RELATED WORK
|
| 95 |
+
|
| 96 |
+
Image Editing with Generative Models. The field of 2D image synthesis has advanced significantly with the development of generative models such as GANs (Karras et al., 2021; 2019) and diffusion models (Rombach et al., 2022; Song et al., 2020; Ho et al., 2020). Many studies capitalize on the rich prior knowledge embedded in generative models for image editing. Some endeavors utilize GANs for various image editing tasks, including image-to-image translation, latent manipulation (Shen et al., 2020; Yang et al., 2021; Zhu et al., 2020; Xu et al., 2021; Jahanian et al., 2019), and text-guided manipulation (Patashnik et al., 2021). However, due to limitations in training on large-scale data, GANs often struggle to perform well on real-world scene images. As diffusion models make notable progress, the community is increasingly focusing on harnessing the potent text-to-image diffusion model for real image editing (Kim et al., 2022; Kawar et al., 2023; Ruiz et al., 2023; Gal et al., 2022; Chen et al., 2023b; Hertz et al., 2022; 2023; Meng et al., 2021b;
|
| 97 |
+
|
| 98 |
+

|
| 99 |
+
Figure 2: 3DitScene training pipeline. Given input view, we first initialize 3DGS by lifting pixels to 3D space and then expand it over novel views by RGB and depth inpainting. Semantic features are then distilled into 3D Gaussians to achieve object-level disentanglement.
|
| 100 |
+
|
| 101 |
+

|
| 102 |
+
Figure 3: 3DitScene Inference pipeline. User can query object of interest via language prompt. Enabled by the disentangled 3D representation, user can change camera viewpoint, and manipulate the object of interest in a flexible manner.
|
| 103 |
+
|
| 104 |
+
Su et al., 2022). However, these methods are confined to the 2D domain and are limited in editing objects within a 3D space. Concurrently, other research efforts (Yenphraphai et al., 2024a) attempt to address 3D-aware image editing, but they introduce inconsistency in the editing process and cannot change the camera perspective of the entire scene. Wang et al. (2024a); Chen et al. (2024a); Ye et al. (2023); Palandra et al. (2024); Wu et al. (2024a); Wang et al. (2024b); Jaganathan et al. (2024) focus on editing a given 3DGS scene but are limited in the types of edits they support. In contrast, our model leverages explicit 3D Gaussians to convert 2D images into 3D space while disentangling objects with language guidance. This approach enables our model not only to consistently perform 3D-aware object editing but also to facilitate scene-level novel-view synthesis.
|
| 105 |
+
|
| 106 |
+
Single-view 3D Scene Synthesis. Among 3D scene generation tasks (Zhang et al., 2023b; Hollein et al., 2023; Chung et al., 2023; Chen et al., 2023a;c; Mao et al., 2023; Epstein et al., 2024), conditional generation from a single view presents a unique challenge. Previous approaches address this challenge by training a versatile model capable of inferring a 3D representation of a scene from a single input image (Wiles et al., 2020; Tucker & Snavely, 2020; Hu et al., 2021; Han et al., 2022; Flynn et al., 2019; Li et al., 2021; Hong et al., 2023; Yu et al., 2021). However, these methods demand extensive datasets for training and tend to produce blurry textures when confronted with significant changes in camera viewpoint. Recently, several works have embraced diffusion priors (Liu et al., 2023; Chan et al., 2023; Xu et al., 2023; Gu et al., 2023; Tang et al., 2023; Qian et al., 2023; Chen et al., 2024b) to acquire a probabilistic distribution over unseen views, leading to better synthesis results. Nevertheless, these methods often concentrate on object-centric scenes or lack 3D consistency. Our approach connects 2D images and 3D scenes with explicit 3D Gaussians and incorporates diffusion knowledge, overcoming the aforementioned challenges.
|
| 107 |
+
|
| 108 |
+
# 3 METHOD
|
| 109 |
+
|
| 110 |
+
Our target is to propose a 3D-aware scene image editing framework (Fig. 2) that allows simultaneous control over the camera and objects. To accomplish this, Sec. 3.1 introduces a novel scene representation called language-guided disentangled Gaussian splatting. In order to achieve object-level control, Sec. 3.2 further distills language features into the Gaussian splatting representation,
|
| 111 |
+
|
| 112 |
+
achieving disentanglement at the object level. We elaborate the optimization process in Sec. 3.3 and demonstrate the flexible user control enabled by our framework during inference in Sec. 3.4.
|
| 113 |
+
|
| 114 |
+
# 3.1 3D GAUSSIAN SPLATTING FROM SINGLE IMAGE
|
| 115 |
+
|
| 116 |
+
Preliminary. 3D Gaussian Splatting (3DGS) (Kerbl et al., 2023) has been proved effective in both reconstructive (Luiten et al., 2023; Yang et al., 2023) and generative setting (Zou et al., 2023; Tang et al., 2023). It represents a 3D scene via a set of explicit 3D Gaussians. Each 3D Gaussian describes its location by a center vector $\mathbf{x} \in \mathbb{R}^3$ , a scaling factor $\mathbf{s} \in \mathbb{R}^3$ , a rotation quaternion $\mathbf{q} \in \mathbb{R}^4$ , and also stores an opacity value $\alpha \in \mathbb{R}$ and spherical harmonics (SH) coefficients $\mathbf{c} \in \mathbb{R}^k$ ( $k$ represents the degrees of freedom of SH) for volumetric rendering. All the above parameters can be denoted as $\Theta = \{\mathbf{x}_i, \mathbf{s}_i, \mathbf{q}_i, \alpha_i, \mathbf{c}_i | i \in [0, \dots, N-1]\}$ , where $N$ is the number of 3D Gaussians. A tile-based rasterizer is used to render these Gaussians into 2D image.
|
| 117 |
+
|
| 118 |
+
Image-to-3DGS initialization. Given an input image $\mathbf{I} \in \mathbb{R}^{3 \times H \times W}$ , an off-the-shelf depth prediction model is applied to estimate its depth map $\mathbf{D} \in \mathbb{R}^{H \times W}$ . Then, we could transform image pixels into 3D space, forming the corresponding 3D point clouds:
|
| 119 |
+
|
| 120 |
+
$$
\mathcal{P} = \phi_{2 \rightarrow 3}(\mathbf{I}, \mathbf{D}, \mathbf{K}, \mathbf{T}), \tag{1}
$$
|
| 123 |
+
|
| 124 |
+
where $\mathbf{K}$ and $\mathbf{T}$ are camera intrinsic and extrinsic matrices respectively. Such point clouds $\mathcal{P}$ are then used to initialize the 3DGS by directly copying the location and color values, with other GS-related parameters randomly initialized. To refine the 3DGS's appearance, we adopt a reconstruction loss:
|
| 125 |
+
|
| 126 |
+
$$
\mathcal{L}_{\text{recon}} = \| \mathbf{I} - f(\mathcal{P}, \mathbf{K}, \mathbf{T}) \|_2^2, \tag{2}
$$
|
| 129 |
+
|
| 130 |
+
where $f$ is the rendering function.
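For illustration, a minimal NumPy sketch of the lifting operator $\phi_{2 \rightarrow 3}$ in Eq. (1) could look as follows; it assumes a pinhole camera model and a camera-to-world extrinsic matrix, which are implementation choices rather than details stated in the paper.

```python
# Sketch of the 2D-to-3D lifting in Eq. (1): back-project every pixel with its
# predicted depth into world space (pinhole camera assumed).
import numpy as np

def lift_to_pointcloud(image: np.ndarray, depth: np.ndarray,
                       K: np.ndarray, T: np.ndarray):
    """image: HxWx3, depth: HxW, K: 3x3 intrinsics, T: 4x4 camera-to-world extrinsics."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                    # camera-space directions
    pts_cam = rays * depth.reshape(-1, 1)                              # scale by depth
    pts_h = np.concatenate([pts_cam, np.ones((H * W, 1))], axis=1)
    pts_world = (pts_h @ T.T)[:, :3]                                   # apply extrinsics
    colors = image.reshape(-1, 3)
    return pts_world, colors  # used to initialize Gaussian centers and colors
```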
|
| 131 |
+
|
| 132 |
+
We further enhance the rendered quality by leveraging prior knowledge from image generative foundation model, namely Stable Diffusion (Rombach et al., 2022). It provides update direction to the images rendered by the current 3DGS in the form of Score Distillation Sampling (Poole et al., 2022) loss, denoted as $\mathcal{L}_{\mathrm{SDS}}$ .
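A hedged sketch of the SDS update in its standard DreamFusion form is given below; the noise-prediction UNet, scheduler, and text embedding are assumed diffusers-style objects, and classifier-free guidance and the per-timestep weighting are omitted for brevity.

```python
# Minimal sketch of the SDS gradient used to refine rendered views (standard
# DreamFusion form; not the authors' exact implementation).
import torch

def sds_grad(eps_model, scheduler, latents, text_emb):
    """Returns the gradient that is backpropagated into the rendered-image latents."""
    t = torch.randint(20, 980, (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    with torch.no_grad():
        eps_pred = eps_model(noisy, t, encoder_hidden_states=text_emb).sample
    w = 1.0                                   # per-timestep weighting, kept constant here
    return w * (eps_pred - noise)             # gradient of L_SDS w.r.t. the latents
```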
|
| 133 |
+
|
| 134 |
+
3DGS expansion by inpainting. When the camera perspective changes, rendered views will contain holes due to occlusion or new regions outside the original view frustum. We use Stable Diffusion to inpaint the uncovered regions. The newly added pixels then need to be accurately transformed into 3D space to align seamlessly with the existing 3D Gaussians.
|
| 135 |
+
|
| 136 |
+
Previous methods (Chung et al., 2023; Hollein et al., 2023; Yu et al., 2023) first predict the depth values and then use heuristics to adjust them to align with the existing 3D structure. However, such heuristics often overlook various scenarios, leading to artifacts such as depth discontinuities or shape deformations.
|
| 137 |
+
|
| 138 |
+
Instead, we propose a novel method to lift novel content to 3D while ensuring seamless alignment without any heuristic procedures. The key insight is to treat the problem as an image inpainting task and utilize state-of-the-art diffusion-based depth estimation models (Ke et al., 2024; Fu et al., 2024; Yang et al., 2024) as a prior to solve it. During the denoising steps, rather than letting the model predict the noise over the entire image, we employ the forward diffusion process to determine the values of the fixed areas (Meng et al., 2021a). This guarantees that the final denoised result adheres to the depth of the originally fixed parts, ensuring smooth expansion. A minimal sketch of this masked denoising loop is given below.
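The sketch illustrates the idea only; `unet` and `scheduler` are assumed diffusers-style objects, and the conditioning inputs of the depth estimator are omitted.

```python
# Sketch of masked denoising for depth inpainting: the known region is replaced
# by its forward-diffused version at every step, so only the unknown region is
# hallucinated by the denoiser.
import torch

@torch.no_grad()
def inpaint_depth(unet, scheduler, known_depth, known_mask, num_steps=50):
    """known_depth: 1x1xHxW, known_mask: 1x1xHxW (1 = depth already fixed)."""
    scheduler.set_timesteps(num_steps)
    x = torch.randn_like(known_depth)                    # start from pure noise
    for t in scheduler.timesteps:
        noise = torch.randn_like(known_depth)
        # Forward-diffuse the fixed region to the current noise level ...
        known_noisy = scheduler.add_noise(known_depth, noise, t)
        # ... and keep it, so only the masked-out region is free to change.
        x = known_mask * known_noisy + (1 - known_mask) * x
        eps = unet(x, t).sample                          # conditioning inputs omitted here
        x = scheduler.step(eps, t, x).prev_sample        # reverse-diffusion update
    return known_mask * known_depth + (1 - known_mask) * x
```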
|
| 139 |
+
|
| 140 |
+
After smooth 3DGS expansion via depth inpainting, we take the imagined novel views as reference views, and apply reconstruction loss $\mathcal{L}_{\mathrm{recon}}$ to supervise the updated 3DGS. SDS loss $\mathcal{L}_{\mathrm{SDS}}$ is adopted for views rendered from camera perspectives that are interpolated between the user-provided viewpoint and the newly imagined views.
|
| 141 |
+
|
| 142 |
+
# 3.2 LANGUAGE-GUIDED DISENTANGLED GAUSSIAN SPLATTING
|
| 143 |
+
|
| 144 |
+
Based on the 3DGS built from single input image, users can generate novel views. In this section, we further distill CLIP (Radford et al., 2021) language feature to 3D Gaussians. This introduces semantics into 3D geometry, which helps disentangle individual objects out of the entire scene representation.
|
| 145 |
+
|
| 146 |
+
Language feature distillation. We augment each 3D Gaussian with a language embedding $\mathbf{e} \in \mathbb{R}^C$ , where $C$ denotes the number of the channels. Similar to RGB image $\mathbf{I}$ , a 2D semantic feature map $\mathbf{E} \in \mathbb{R}^{C \times H \times W}$ can also be rendered by the rasterizer. To learn the embedding, we first use Segment Anything Model (SAM) (Kirillov et al., 2023; Zhang et al., 2023a) to get semantic masks $\mathbf{M}_i$ . Then, we can obtain embedding of each object $\mathbf{I} \odot \mathbf{M}_i$ and supervise the corresponding region on rendered feature map $\mathbf{E}$ , according to the distillation loss:
|
| 147 |
+
|
| 148 |
+
$$
\mathcal{L}_{\text{distill}} = \sum_{i} \left\| \left( \mathbf{E} - g\left( \mathbf{I} \odot \mathbf{M}_i \right) \right) \odot \mathbf{M}_i \right\|_2^2, \tag{3}
$$
|
| 151 |
+
|
| 152 |
+
where $g$ is CLIP's image encoder and $\odot$ denotes element-wise multiplication. Following LangSplat (Qin et al., 2024), we additionally train an autoencoder to compress the embedding space, reducing the memory consumption of the language embedding $\mathbf{e}$.
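A small sketch of Eq. (3) as a training loss is given below; `clip_encode` (returning a C-dimensional embedding) and the broadcasting of that embedding over each mask are assumptions about the implementation.

```python
# Sketch of the language-feature distillation loss in Eq. (3).
import torch

def distill_loss(E: torch.Tensor, image: torch.Tensor, masks, clip_encode) -> torch.Tensor:
    """E: CxHxW rendered feature map, image: 3xHxW, masks: iterable of HxW {0,1} masks."""
    loss = torch.zeros((), device=E.device)
    for M in masks:
        e_obj = clip_encode(image * M)                # C-dim CLIP embedding of the masked object
        target = e_obj[:, None, None].expand_as(E)    # broadcast over spatial locations
        loss = loss + (((E - target) * M) ** 2).mean()
    return loss
```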
|
| 153 |
+
|
| 154 |
+
Scene decomposition. After distillation, we can decompose the scene into different objects. This enables user to query and ground specific object, and perform editing over single object (e.g. translation, rotation, removal, re-stylizing).
|
| 155 |
+
|
| 156 |
+
It is worth noting that such scene decomposition property not only enables more flexible edits during inference stage, but also provides augmentation over scene layouts during the optimization process. Since now we can query and render each object independently, we apply random translation, rotation, and removal over objects. This augmentation over the scene layout leads to a significant improvement in the appearance of occluded regions, ultimately enhancing the overall quality of the edited views (see Sec. 4.4).
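As a rough sketch of this augmentation (the operation set and ranges are assumptions, not the exact training configuration), one training iteration could perturb the queried object's Gaussians like this:

```python
# Random layout augmentation over a disentangled object's Gaussians.
import numpy as np

def augment_layout(centers: np.ndarray, opacities: np.ndarray, obj_idx: np.ndarray,
                   rng: np.random.Generator):
    """centers: Nx3 Gaussian centers, opacities: N, obj_idx: indices of the object."""
    centers, opacities = centers.copy(), opacities.copy()
    op = rng.choice(["translate", "rotate", "remove"])
    if op == "translate":
        centers[obj_idx] += rng.uniform(-0.2, 0.2, size=3)        # random shift
    elif op == "rotate":
        a = rng.uniform(-np.pi / 6, np.pi / 6)                     # rotate about the y-axis
        R = np.array([[np.cos(a), 0, np.sin(a)],
                      [0, 1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
        pivot = centers[obj_idx].mean(axis=0)
        centers[obj_idx] = (centers[obj_idx] - pivot) @ R.T + pivot
    else:  # remove: make the object invisible for this iteration
        opacities[obj_idx] = 0.0
    return centers, opacities
```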
|
| 157 |
+
|
| 158 |
+
# 3.3 TRAINING
|
| 159 |
+
|
| 160 |
+
The overall training objective can be expressed as:
|
| 161 |
+
|
| 162 |
+
$$
\mathcal{L} = \lambda_{\text{recon}} \mathcal{L}_{\text{recon}} + \lambda_{\text{SDS}} \mathcal{L}_{\text{SDS}} + \lambda_{\text{distill}} \mathcal{L}_{\text{distill}}, \tag{4}
$$
|
| 165 |
+
|
| 166 |
+
where $\lambda_{\mathrm{recon}}$ , $\lambda_{\mathrm{SDS}}$ and $\lambda_{\mathrm{distill}}$ are coefficients that balance each loss term.
|
| 167 |
+
|
| 168 |
+
# 3.4 INFERENCE
|
| 169 |
+
|
| 170 |
+
Due to the disentangled nature of our representation, users can now interact with and manipulate objects in a flexible manner. Here, we mainly discuss prompting objects via two different modalities:
|
| 171 |
+
|
| 172 |
+
Text prompt. Users can query an object through text prompts as shown in Fig. 3. Following LERF (Kerr et al., 2023) and LangSplat (Qin et al., 2024), we calculate the relevancy score between the language embedding $\mathbf{e}$ in the 3D Gaussians and the embedding of the text prompt $\mathbf{e}_l$ as:
|
| 173 |
+
|
| 174 |
+
$$
\operatorname{score} = \min_{i} \frac{\exp(\mathbf{e} \cdot \mathbf{e}_l)}{\exp(\mathbf{e} \cdot \mathbf{e}_l) + \exp(\mathbf{e} \cdot \mathbf{e}_{\text{canon}}^i)}, \tag{5}
$$
|
| 177 |
+
|
| 178 |
+
where $\mathbf{e}_{\mathrm{canon}}^i$ is the CLIP embeddings of canonical phrases including "object", "things", "stuff", and "texture". Gaussians that have relevance scores below a predefined threshold are excluded. The remaining part is identified as the object of user interest.
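A small sketch of Eq. (5) over per-Gaussian embeddings is shown below; `clip_text_encode` and the thresholding step are assumptions following the description above.

```python
# Relevancy score of each Gaussian's language embedding against a text query.
import torch

CANONICAL = ["object", "things", "stuff", "texture"]

def relevancy(e: torch.Tensor, text: str, clip_text_encode) -> torch.Tensor:
    """e: NxC per-Gaussian embeddings; returns an N-dim relevancy score."""
    e_l = clip_text_encode(text)                                      # C
    e_canon = torch.stack([clip_text_encode(p) for p in CANONICAL])   # 4xC
    pos = torch.exp(e @ e_l)                                          # N
    neg = torch.exp(e @ e_canon.T)                                    # Nx4
    ratios = pos[:, None] / (pos[:, None] + neg)                      # Eq. (5), per canonical phrase
    return ratios.min(dim=1).values                                   # min over canonical phrases

# Gaussians whose relevancy exceeds a chosen threshold form the queried object.
```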
|
| 179 |
+
|
| 180 |
+
Bounding box. Users can also select an object by drawing an approximate bounding box around it on the input image. We group 3D Gaussians within the bounding box by K-Means clustering, and discard clusters whose number of Gaussians does not exceed a threshold proportion.
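A hedged sketch of this bounding-box selection is given below; the projection of Gaussian centers into the input view, the number of clusters, and the threshold proportion are assumptions.

```python
# Select the Gaussians belonging to an object marked by a 2D bounding box.
import numpy as np
from sklearn.cluster import KMeans

def select_by_bbox(centers_2d: np.ndarray, bbox, n_clusters=4, min_frac=0.1):
    """centers_2d: Nx2 projected Gaussian centers; bbox: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    inside = ((centers_2d[:, 0] >= x0) & (centers_2d[:, 0] <= x1) &
              (centers_2d[:, 1] >= y0) & (centers_2d[:, 1] <= y1))
    idx = np.nonzero(inside)[0]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(centers_2d[idx])
    keep = []
    for c in range(n_clusters):
        members = idx[labels == c]
        if len(members) >= min_frac * len(idx):      # discard clusters below the threshold
            keep.extend(members.tolist())
    return np.array(keep)                            # indices of the selected Gaussians
```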
|
| 181 |
+
|
| 182 |
+
In the meantime, users can also adjust the camera by specifying intrinsic and extrinsic parameters.
|
| 183 |
+
|
| 184 |
+
# 4 EXPERIMENTS
|
| 185 |
+
|
| 186 |
+
# 4.1 SETTINGS
|
| 187 |
+
|
| 188 |
+
Implementation details. To lift an image to 3D, we use GeoWizard (Fu et al., 2024) to estimate its relative depth. Stable Diffusion (Rombach et al., 2022)'s inpainting pipeline is adopted to generate new content for 3DGS's expansion. We leverage MobileSAM (Zhang et al., 2023a) and
|
| 189 |
+
|
| 190 |
+
Table 1: User study result. We report the percentage of users favoring each method in terms of the consistency and quality of the edited images.
|
| 191 |
+
|
| 192 |
+
<table><tr><td></td><td></td><td>AnyDoor</td><td>Object 3DIT</td><td>Image Sculpting</td><td>Ours</td></tr><tr><td rowspan="2">Consistency</td><td>Human</td><td>5.1</td><td>16.8</td><td>12.7</td><td>65.4</td></tr><tr><td>GPT4-v</td><td>0.0</td><td>6.7</td><td>31.3</td><td>62.0</td></tr><tr><td rowspan="2">Quality</td><td>Human</td><td>10.4</td><td>0.5</td><td>25.1</td><td>64.0</td></tr><tr><td>GPT4-v</td><td>6.7</td><td>13.3</td><td>39.2</td><td>40.8</td></tr></table>
|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
|
| 196 |
+

|
| 197 |
+
|
| 198 |
+

|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
view 1
|
| 202 |
+
Figure 4: Visualization of rendered images and feature maps. For each sample, we show three views of rendered images and feature maps. To demonstrate the disentangled scene representation, we use the language embedding to select a foreground object and render it exclusively.
|
| 203 |
+
|
| 204 |
+

|
| 205 |
+
|
| 206 |
+

|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
view 2
|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
|
| 215 |
+

|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
|
| 219 |
+

|
| 220 |
+
view 3
|
| 221 |
+
|
| 222 |
+

|
| 223 |
+
|
| 224 |
+

|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
|
| 228 |
+

|
| 229 |
+
disentangled object
|
| 230 |
+
|
| 231 |
+
OpenCLIP (Ilharco et al., 2021) to segment and compute the feature maps of rendered views, which are then used to supervise the language embeddings of the 3D Gaussians. We use Stable Diffusion to perform Score Distillation Sampling (Poole et al., 2022) during optimization. Since the image quality is already decent at the start of optimization thanks to the explicit 3DGS initialization, we adopt a low classifier-free guidance (Ho & Salimans, 2022) scale.
|
| 232 |
+
|
| 233 |
+
Baselines. We compare our method with following scene image editing works: (1) AnyDoor (Chen et al., 2023b) is a 2D diffusion-based model that can teleport target objects into given scene images. It leverages Stable Diffusion's powerful image generative prior by finetuning upon it. (2) Object 3DIT (Michel et al., 2024) is designed for 3D-aware object-centric image editing via language instructions. It finetunes Stable Diffusion over a synthetic dataset containing pairs of original image, language instruction, and edited image. (3) Image Sculpting (Yenphraphai et al., 2024b) is also designed for 3D-aware object-centric image editing. It estimates a 3D model from an object in the input image to enable precise 3D control over the geometry. It also uses Stable Diffusion to refine the edited image quality. (4) AdaMPI (Han et al., 2022) focuses on the control over camera perspective. It leverages monocular depth estimation and color inpainting with learned adaptive layered depth representations. (5) LucidDreamer (Chung et al., 2023) tackles novel view synthesis by querying Stable Diffusion's inpainting pipeline with dense camera trajectories.
|
| 234 |
+
|
| 235 |
+
# 4.2 QUANTITATIVE RESULTS
|
| 236 |
+
|
| 237 |
+
We conduct a user study to compare the edited results by our method with the established baselines. We generate 20 samples for each method and request users to vote for their preferred method based on consistency with the original image and quality for each sample. We collect feedback from
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
Figure 5: Comparison results of object-centric manipulation. We apply translation, resizing, and removal over foreground objects.
|
| 241 |
+
|
| 242 |
+

|
| 243 |
+
Figure 6: Comparison results of camera control. We show two views with different camera perspectives for each method.
|
| 244 |
+
|
| 245 |
+
25 users, and report the result in Tab. 1. Our method consistently outperforms previous baselines in terms of both consistency and image quality. As recommended in a previous study (Wu et al., 2024b), GPT4-v has the ability to evaluate 3D consistency and image quality. Therefore, we include GPT-4v as an additional criterion. The preference of GPT4-v is well aligned with human preference, which once again demonstrates the superiority of 3DitScene.
|
| 246 |
+
|
| 247 |
+
# 4.3 QUALITATIVE RESULTS
|
| 248 |
+
|
| 249 |
+
Fig. 4 showcases the generated novel views with their respective feature maps produced by our framework. The feature maps demonstrate remarkable accuracy in capturing the semantic content of the images. This ability to distinctly separate semantic information plays a crucial role in achieving precise object-level control. In the following, we demonstrate the flexible editing over scene images enabled by our framework and compare with baseline methods.
|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
Figure 7: Ablation results for layout augmentation during optimization. To evaluate the degree of object-level disentanglement, we conduct object removal for each sample. The top row displays the input image, while the next two rows showcase the edited scenes.
|
| 253 |
+
|
| 254 |
+

|
| 255 |
+
Figure 8: Ablation results for loss terms. We show rendered novel views under different loss settings. The left column lists the input image; in the right columns, two views are shown for each configuration. The quality degrades when the reconstruction or SDS loss term is discarded.
|
| 256 |
+
|
| 257 |
+
Object manipulation. Since different methods define object manipulation, particularly translation operations, in different coordinate systems<sup>1</sup>, it becomes challenging to evaluate them under a unified and fair setting. Therefore, we evaluate each method under its own specific setting to achieve the best possible result. As shown in Fig. 5, AnyDoor struggles to maintain object identity and 3D consistency when manipulating object layouts, primarily due to the absence of 3D cues. Object 3DIT, trained on synthetic datasets, exhibits limited generalization ability to real images. By leveraging a 3D model derived from the input image, Image Sculpting achieves better results. Nonetheless, it encounters issues with inconsistency when manipulating objects.
|
| 258 |
+
|
| 259 |
+
In contrast, our method delivers satisfactory 3D-aware object-level editing results. It maintains accurate 3D consistency of edited objects after rearranging their layout. Additionally, it preserves occlusion relationships within the scene, such as moving the girl to be partially occluded by a foreground object in the last row example.
|
| 260 |
+
|
| 261 |
+
Camera control. We compare our methods with AdaMPI and LucidDreamer for camera control. As illustrated in Fig. 6, AdaMPI only focuses on scenarios where the camera zooms in, and does not consider novel view synthesis. Therefore, this approach is not suitable for 3D-aware image editing when large camera control is required. LucidDreamer also leverages Stable Diffusion's inpainting capacity for novel view synthesis. However, it suffers from sudden transitions in the content within the frame (see sample in the bottom line). It also requires dense camera poses. In contrast, our method only needs as few as three camera poses and enables smooth transitions from the input view to novel views, enhancing user control over the camera perspective.
|
| 262 |
+
|
| 263 |
+

|
| 264 |
+
Figure 9: Ablation results for depth inpainting. The first row shows images with their corresponding depth maps (available on the left half). The second and third row display the depth map predicted by heuristic alignment, and our depth inpainting method respectively.
|
| 265 |
+
|
| 266 |
+
# 4.4 ABLATION STUDY
|
| 267 |
+
|
| 268 |
+
Layout augmentation during optimization. As our representation disentangles at object level, we could perform layout augmentation during optimization. Here, we investigate whether disentanglement property benefits the optimization process. We use the task of removing objects to evaluate the degree of disentanglement.
|
| 269 |
+
|
| 270 |
+
As illustrated in Fig. 7, when layout augmentation is disabled during optimization, floating artifacts can be observed. We discover that these Gaussians lie inside the object. They are occluded by Gaussians at the surface. As they do not contribute to the rendering result, they are consequently not updated by gradient descent during optimization, leaving their language embeddings unsupervised.
|
| 271 |
+
|
| 272 |
+
In contrast, when applying layout augmentation during optimization, such Gaussians will be exposed when the foreground object is moved away, and hence updated. With this ablation, it is concluded that the disentanglement property of the proposed representation not only enables more flexible inference, but also contributes to the optimization process.
|
| 273 |
+
|
| 274 |
+
Loss terms. During optimization, we adopt three loss terms: $\mathcal{L}_{\mathrm{recon}}$ , $\mathcal{L}_{\mathrm{SDS}}$ , and $\mathcal{L}_{\mathrm{distill}}$ . $\mathcal{L}_{\mathrm{distill}}$ plays a critical role in distilling the language embedding into 3D, while the remaining two terms focus on enhancing the visual quality of images. As illustrated in Fig. 8, the image quality degrades severely without $\mathcal{L}_{\mathrm{recon}}$ or $\mathcal{L}_{\mathrm{SDS}}$ . Without $\mathcal{L}_{\mathrm{recon}}$ , the image is refined only by the SDS loss, which creates discrepancies with the original image. When the CFG value is set low (5 by default), the image lacks details and exhibits unusual texture patterns; increasing the CFG value introduces more details but leads to inconsistencies with the original image, while the strange texture patterns persist. Additionally, applying only $\mathcal{L}_{\mathrm{recon}}$ results in floating artifacts and blurriness across the entire image. In conclusion, both the SDS and reconstruction losses are crucial for achieving decent image quality.
|
| 275 |
+
|
| 276 |
+
Depth inpainting. When expanding the 3DGS at novel views, we need to estimate the depth map of unseen regions. Here, we compare our inpainting-based depth estimation with a heuristic-based method. Fig. 9 shows images whose depth map is available only in the left half; the task is to predict the depth map of the right half. The method relying on heuristic alignment produces artifacts such as depth discontinuities. In contrast, our proposed method produces accurate depth maps that align well with the known left part.
|
| 277 |
+
|
| 278 |
+
# 5 CONCLUSION AND DISCUSSION
|
| 279 |
+
|
| 280 |
+
We present a novel framework, 3DitScene, for scene image editing. Our primary objective is to facilitate 3D-aware editing of both objects and the entire scene within a unified framework.
|
| 281 |
+
|
| 282 |
+
We achieve this by leveraging a new scene representation, language-guided disentangled scene representation. This representation is learnt by distilling CLIP's language feature into 3D Gaussians. The semantic 3D Gaussians effectively disentangle individual objects out of the entire scene, thereby enabling localized object editing. We test 3DitScene under different settings and prove its superiority compared to previous methods.
|
| 283 |
+
|
| 284 |
+
# REFERENCES
|
| 285 |
+
|
| 286 |
+
Eric R Chan, Koki Nagano, Matthew A Chan, Alexander W Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3d-aware diffusion models. ICCV, 2023.
|
| 287 |
+
Dave Zhenyu Chen, Haoxuan Li, Hsin-Ying Lee, Sergey Tulyakov, and Matthias Nießner. Scenetex: High-quality texture synthesis for indoor scenes via diffusion priors. arXiv preprint arXiv:2311.17261, 2023a.
|
| 288 |
+
Xi Chen, Lianghua Huang, Yu Liu, Yujun Shen, Deli Zhao, and Hengshuang Zhao. Anydoor: Zero-shot object-level image customization. arXiv preprint arXiv:2307.09481, 2023b.
|
| 289 |
+
Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. Gaussianeditor: Swift and controllable 3d editing with gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21476-21485, 2024a.
|
| 290 |
+
Zhaoxi Chen, Guangcong Wang, and Ziwei Liu. Scenedreamer: Unbounded 3d scene generation from 2d image collections. arXiv preprint arXiv:2302.01330, 2023c.
|
| 291 |
+
Zilong Chen, Feng Wang, Yikai Wang, and Huaping Liu. Text-to-3d using gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21401-21412, 2024b.
|
| 292 |
+
Jaeyoung Chung, Suyoung Lee, Hyeongjin Nam, Jaerin Lee, and Kyoung Mu Lee. Luciddreamer: Domain-free generation of 3d gaussian splatting scenes. arXiv preprint arXiv:2311.13384, 2023.
|
| 293 |
+
Dave Epstein, Ben Poole, Ben Mildenhall, Alexei A Efros, and Aleksander Holynski. Disentangled 3d scene generation with layout learning. arXiv preprint arXiv:2402.16936, 2024.
|
| 294 |
+
John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, and Richard Tucker. Deepview: View synthesis with learned gradient descent. In CVPR, 2019.
|
| 295 |
+
Xiao Fu, Wei Yin, Mu Hu, Kaixuan Wang, Yuexin Ma, Ping Tan, Shaojie Shen, Dahua Lin, and Xiaoxiao Long. Geowizard: Unleashing the diffusion priors for 3d geometry estimation from a single image. arXiv preprint arXiv:2403.12013, 2024.
|
| 296 |
+
Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022.
|
| 297 |
+
Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. Nerfdiff: Single-image view synthesis with nerf-guided distillation from 3d-aware diffusion. In ICML, 2023.
|
| 298 |
+
Yuxuan Han, Ruicheng Wang, and Jiaolong Yang. Single-view view synthesis in the wild with learned adaptive multiplane images. In ACM SIGGRAPH Conference Proceedings, 2022.
|
| 299 |
+
Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.
|
| 300 |
+
Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. arXiv preprint arXiv:2312.02133, 2023.
|
| 301 |
+
|
| 302 |
+
Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
|
| 303 |
+
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020.
|
| 304 |
+
Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nießner. Text2room: Extracting textured 3d meshes from 2d text-to-image models. arXiv preprint arXiv:2303.11989, 2023.
|
| 305 |
+
Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. arXiv preprint arXiv:2311.04400, 2023.
|
| 306 |
+
Ronghang Hu, Nikhila Ravi, Alexander C Berg, and Deepak Pathak. Worldsheet: Wrapping the world in a 3d sheet for view synthesis from a single image. In ICCV, 2021.
|
| 307 |
+
Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, July 2021. URL https://doi.org/10.5281/zenodo.5143773.
|
| 308 |
+
Vishnu Jaganathan, Hannah Hanyun Huang, Muhammad Zubair Irshad, Varun Jampani, Amit Raj, and Zsolt Kira. Ice-g: Image conditional editing of 3d gaussian splats. arXiv preprint arXiv:2406.08488, 2024.
|
| 309 |
+
Ali Jahanian, Lucy Chai, and Phillip Isola. On the "steerability" of generative adversarial networks. arXiv preprint arXiv:1907.07171, 2019.
|
| 310 |
+
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.
|
| 311 |
+
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In NeurIPS, 2021.
|
| 312 |
+
Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In CVPR, 2023.
|
| 313 |
+
Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler. Repurposing diffusion-based image generators for monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9492-9502, 2024.
|
| 314 |
+
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), 2023.
|
| 315 |
+
Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. In CVPR, pp. 19729-19739, 2023.
|
| 316 |
+
Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffusionclip: Text-guided diffusion models for robust image manipulation. In CVPR, 2022.
|
| 317 |
+
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
|
| 318 |
+
Jiaxin Li, Zijian Feng, Qi She, Henghui Ding, Changhu Wang, and Gim Hee Lee. Mine: Towards continuous depth MPI with nerf for novel view synthesis. In ICCV, 2021.
|
| 319 |
+
Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9298-9309, 2023.
|
| 320 |
+
Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis. arXiv preprint arXiv:2308.09713, 2023.
|
| 321 |
+
|
| 322 |
+
Weijia Mao, Yan-Pei Cao, Jia-Wei Liu, Zhongcong Xu, and Mike Zheng Shou. Showroom3d: Text to high-quality 3d room generation using 3d priors. arXiv preprint arXiv:2312.13324, 2023.
|
| 323 |
+
Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073, 2021a.
|
| 324 |
+
Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073, 2021b.
|
| 325 |
+
Oscar Michel, Anand Bhattad, Eli VanderBilt, Ranjay Krishna, Aniruddha Kembhavi, and Tanmay Gupta. Object 3dit: Language-guided 3d-aware image editing. Advances in Neural Information Processing Systems, 36, 2024.
|
| 326 |
+
Francesco Palandra, Andrea Sanchietti, Daniele Baieri, and Emanuele Rodolà. Gsedit: Efficient text-guided editing of 3d objects via gaussian splatting. arXiv preprint arXiv:2403.05154, 2024.
|
| 327 |
+
Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In CVPR, 2021.
|
| 328 |
+
Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022.
|
| 329 |
+
Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, HsinYing Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors. arXiv preprint arXiv:2306.17843, 2023.
|
| 330 |
+
Minghan Qin, Wanhua Li, Jiawei Zhou, Haoqian Wang, and Hanspeter Pfister. Langsplat: 3d language gaussian splatting. In CVPR, 2024.
|
| 331 |
+
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
|
| 332 |
+
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
|
| 333 |
+
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 2023.
|
| 334 |
+
Yujun Shen, Ceyuan Yang, Xiaoou Tang, and Bolei Zhou. Interfacegan: Interpreting the disentangled face representation learned by gans. IEEE TPAMI, 2020.
|
| 335 |
+
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
|
| 336 |
+
Xuan Su, Jiaming Song, Chenlin Meng, and Stefano Ermon. Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382, 2022.
|
| 337 |
+
Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. arXiv preprint arXiv:2309.16653, 2023.
|
| 338 |
+
Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In CVPR, 2020.
|
| 339 |
+
Junjie Wang, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, and Qi Tian. Gaussianeditor: Editing 3d gaussians delicately with text instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20902-20911, 2024a.
|
| 340 |
+
|
| 341 |
+
Yuxuan Wang, Xuanyu Yi, Zike Wu, Na Zhao, Long Chen, and Hanwang Zhang. View-consistent 3d editing with gaussian splatting. arXiv preprint arXiv:2403.11868, 2024b.
|
| 342 |
+
Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In CVPR, 2020.
|
| 343 |
+
Jing Wu, Jia-Wang Bian, Xinghui Li, Guangrun Wang, Ian Reid, Philip Torr, and Victor Adrian Prisacariu. Gaussctrl: multi-view consistent text-driven 3d gaussian splatting editing. arXiv preprint arXiv:2403.08733, 2024a.
|
| 344 |
+
Tong Wu, Guandao Yang, Zhibing Li, Kai Zhang, Ziwei Liu, Leonidas Guibas, Dahua Lin, and Gordon Wetzstein. Gpt-4v(ision) is a human-aligned evaluator for text-to-3d generation. arXiv preprint arXiv:2401.04092, 2024b.
|
| 345 |
+
Yinghao Xu, Yujun Shen, Jiapeng Zhu, Ceyuan Yang, and Bolei Zhou. Generative hierarchical features from synthesizing images. In CVPR, 2021.
|
| 346 |
+
Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, et al. Dmv3d: Denoising multi-view diffusion using 3d large reconstruction model. arXiv preprint arXiv:2311.09217, 2023.
|
| 347 |
+
Ceyuan Yang, Yujun Shen, and Bolei Zhou. Semantic hierarchy emerges in deep generative representations for scene synthesis. IJCV, 2021.
|
| 348 |
+
Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything: Unleashing the power of large-scale unlabeled data. arXiv preprint arXiv:2401.10891, 2024.
|
| 349 |
+
Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101, 2023.
|
| 350 |
+
Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes. arXiv preprint arXiv:2312.00732, 2023.
|
| 351 |
+
Jiraphon Yenphraphai, Xichen Pan, Sainan Liu, Daniele Panozzo, and Saining Xie. Image sculpting: Precise object editing with 3d geometry control. arXiv preprint arXiv:2401.01702, 2024a.
|
| 352 |
+
Jiraphon Yenphraphai, Xichen Pan, Sainan Liu, Daniele Panozzo, and Saining Xie. Image sculpting: Precise object editing with 3d geometry control. arXiv preprint arXiv:2401.01702, 2024b.
|
| 353 |
+
Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In CVPR, 2021.
|
| 354 |
+
Hong-Xing Yu, Haoyi Duan, Junhwa Hur, Kyle Sargent, Michael Rubinstein, William T Freeman, Forrester Cole, Deqing Sun, Noah Snavely, Jiajun Wu, et al. Wonderjourney: Going from anywhere to everywhere. arXiv preprint arXiv:2312.03884, 2023.
|
| 355 |
+
Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae, Seungkyu Lee, and Choong Seon Hong. Faster segment anything: Towards lightweight sam for mobile applications. arXiv preprint arXiv:2306.14289, 2023a.
|
| 356 |
+
Qihang Zhang, Chaoyang Wang, Aliaksandr Siarohin, Peiye Zhuang, Yinghao Xu, Ceyuan Yang, Dahua Lin, Bolei Zhou, Sergey Tulyakov, and Hsin-Ying Lee. Scenewiz3d: Towards text-guided 3d scene composition. arXiv preprint arXiv:2312.08885, 2023b.
|
| 357 |
+
Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-domain gan inversion for real image editing. In ECCV, 2020.
|
| 358 |
+
Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. arXiv preprint arXiv:2312.09147, 2023.
|
| 359 |
+
|
| 360 |
+
# APPENDIX
|
| 361 |
+
|
| 362 |
+
# A IMPLEMENTATION DETAILS
|
| 363 |
+
|
| 364 |
+
Scene initialization. We first utilize GeoWizard (Fu et al., 2024) to estimate the relative depth of the input image. Next, we lift the image into 3D space based on this depth and perform 300 SDS steps. The camera azimuth angle is randomly sampled from $[-15^{\circ}, 15^{\circ}]$ . After 300 steps, we employ Stable Diffusion (Rombach et al., 2022)'s inpainting pipeline to generate new content for the expansion of 3DGS. Specifically, we inpaint the rendered views at azimuth angles of $-15^{\circ}$ and $15^{\circ}$ . Finally, we use GeoWizard (Fu et al., 2024) again to estimate the depth of the newly added regions and lift them into 3D space.
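For reference, the initialization procedure above can be summarized by the following hypothetical sketch. Every callable is a placeholder for the corresponding component (GeoWizard depth estimation, point lifting, SDS refinement, 3DGS rendering, and Stable Diffusion inpainting) rather than an actual API.

```python
def initialize_scene(image, estimate_depth, lift_to_gaussians, sds_refine,
                     render_view, inpaint_view):
    """Hypothetical sketch of the scene-initialization pipeline described above.
    All arguments are stand-in callables; none of these names correspond to a
    released API."""
    depth = estimate_depth(image)                       # relative depth of the input view
    gaussians = lift_to_gaussians(image, depth)         # unproject pixels into 3D Gaussians
    gaussians = sds_refine(gaussians, steps=300,        # 300 SDS steps, random azimuth
                           azimuth_range=(-15.0, 15.0))
    for azimuth in (-15.0, 15.0):                       # expand the scene at two novel views
        rendered, hole_mask = render_view(gaussians, azimuth)
        completed = inpaint_view(rendered, hole_mask)   # fill unseen regions via inpainting
        new_depth = estimate_depth(completed)           # depth of newly generated content
        gaussians = gaussians + lift_to_gaussians(completed, new_depth, mask=hole_mask)
    return gaussians
```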
|
| 365 |
+
|
| 366 |
+
SDS optimization. We perform 1500 SDS steps to optimize the whole scene. We randomly sample the diffusion time step from $[l, r]$ , where $l = 0.02$ , and $r$ starts at 0.5 and gradually decreases to 0.2 by the 1000th step. We use guidance strength of 5 for classifier-free guidance.
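The time-step schedule can be summarized by a small helper; the linear decay of the upper bound $r$ is an assumption here, as the text only states that it decreases gradually.

```python
import random

def sample_sds_timestep(step, l=0.02, r_start=0.5, r_end=0.2, decay_steps=1000):
    """Sample a diffusion time step t ~ U(l, r), where the upper bound r decays
    from r_start to r_end over the first `decay_steps` optimization steps.
    The decay is assumed linear; the paper only says it decreases gradually."""
    frac = min(step / decay_steps, 1.0)
    r = r_start + (r_end - r_start) * frac
    return random.uniform(l, r)

# Example: bounds early vs. late in the 1500-step optimization
print(sample_sds_timestep(0))     # drawn from [0.02, 0.5]
print(sample_sds_timestep(1200))  # drawn from [0.02, 0.2]
```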
|
| 367 |
+
|
| 368 |
+
**Coefficients.** In Eq. (4), we choose $\lambda_{\mathrm{recon}} = 1000$ , $\lambda_{\mathrm{SDS}} = 0.01$ , and $\lambda_{\mathrm{distill}} = 1$ .
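Eq. (4) itself is not reproduced in this appendix; assuming the standard weighted-sum form, the overall objective with these coefficients would read:

$$
\mathcal{L} = \lambda_{\mathrm{recon}} \mathcal{L}_{\mathrm{recon}} + \lambda_{\mathrm{SDS}} \mathcal{L}_{\mathrm{SDS}} + \lambda_{\mathrm{distill}} \mathcal{L}_{\mathrm{distill}} = 1000\, \mathcal{L}_{\mathrm{recon}} + 0.01\, \mathcal{L}_{\mathrm{SDS}} + \mathcal{L}_{\mathrm{distill}}.
$$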
|
| 369 |
+
|
| 370 |
+
# B EDITING PIPELINE AND USER INTERFACE
|
| 371 |
+
|
| 372 |
+
To enhance the user experience, we explore two types of editing frameworks:
|
| 373 |
+
|
| 374 |
+
1. Large language model (LLM) based. We prompt the LLM to take the user's editing request as input and parse it into predefined components, including camera movement angles, a description of the object of interest, and the transformation matrix for that object. Our editing algorithm then uses these components to optimize the 3D scene and carry out the requested edits (an illustrative parse is sketched at the end of this section).
|
| 375 |
+
|
| 376 |
+
However, relying solely on the LLM as the interface for our editing algorithm has its drawbacks; it makes it difficult for users to interactively manipulate and edit the optimized 3D scene. To address this, we have also developed a user interface that facilitates these interactions.
|
| 377 |
+
|
| 378 |
+
2. User interface (UI) based. We also developed a web-based interactive panel that lets users control the editing process. Fig. A1 provides an overview of the user interface. To use it, users upload an image, specify the object of interest, and provide text prompts, as illustrated in Fig. A2. They can then visualize the optimization process in real time. After optimization is complete, users can perform various edits with draggable sliders, including moving, removing, and rotating objects, as shown in Fig. A3, Fig. A4, and Fig. A5.
|
| 379 |
+
|
| 380 |
+
Neither editing workflow requires users to interact with the code at any point.
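To make the LLM-based workflow above concrete, here is a purely hypothetical example of a user request parsed into the predefined components; the field names and values are illustrative, not the actual schema used by our system.

```python
import json

# Hypothetical parse of "rotate the bear 30 degrees and move the camera slightly left"
edit_request = {
    "camera": {"azimuth_deg": -10.0, "elevation_deg": 0.0},
    "object_description": "the bear",
    "object_transform": [  # 4x4 homogeneous matrix: 30-degree rotation about the y-axis
        [0.866, 0.0, 0.5,   0.0],
        [0.0,   1.0, 0.0,   0.0],
        [-0.5,  0.0, 0.866, 0.0],
        [0.0,   0.0, 0.0,   1.0],
    ],
}
print(json.dumps(edit_request, indent=2))
```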
|
| 381 |
+
|
| 382 |
+

|
| 383 |
+
Figure A1: Overview of the user interface. To provide better user experience, we have developed a web-based interface. Users simply need to upload the input image and enter a text prompt. They can then visualize the optimization process in real time and edit the scene using sliders.
|
| 384 |
+
|
| 385 |
+

|
| 386 |
+
Figure A2: Procedure to optimize a scene via the user interface. To optimize a 3D scene for 3D-aware image editing, users need to upload an image, specify the object of interest, and provide text prompts. They can then visualize the optimization process in real time.
|
| 387 |
+
|
| 388 |
+

|
| 389 |
+
Figure A3: Interactive editing via the user interface. In this sample, we demonstrate that by sliding the bar, users can adjust the object's offset along the Y coordinate.
|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
Figure A4: Interactive editing via the user interface. In this sample, we demonstrate that by sliding the bar, users can adjust the object's offset along the X coordinate.
|
| 397 |
+
|
| 398 |
+

|
| 399 |
+
|
| 400 |
+

|
| 401 |
+
|
| 402 |
+

|
| 403 |
+
Figure A5: Interactive editing via the user interface. In this sample, we demonstrate that by sliding the bar, users can adjust the object's rotation.
|
| 404 |
+
|
| 405 |
+

|
| 406 |
+
|
| 407 |
+

|
3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:02b977841505db00d221d6000b59cd140fb9e717bd0c2d6e7a34214aa80b950b
|
| 3 |
+
size 1180044
|
3ditsceneeditinganyscenevialanguageguideddisentangledgaussiansplatting/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:362493db61438ea4a12fff0c194f52ee71e2114f6d101bdde1d43dacb3caecc0
|
| 3 |
+
size 463360
|
3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/5336a4e7-b106-4ad1-a8ae-4707462c1dc6_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9950030c53f54fd879ab764c0783d3b75960d086ff9ee017f2796f273fed61b5
|
| 3 |
+
size 127688
|
3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/5336a4e7-b106-4ad1-a8ae-4707462c1dc6_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b705177574d900ffab32a6bd87fca18f12464eca9eb20b797620e5a11d3a8f48
|
| 3 |
+
size 159496
|
3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/5336a4e7-b106-4ad1-a8ae-4707462c1dc6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:10c87b58999857f1f358a35d3a7f6d8e9851e0b83c853d44a8e8d92881cd00cc
|
| 3 |
+
size 1609165
|
3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/full.md
ADDED
|
@@ -0,0 +1,439 @@
| 1 |
+
# 3DMOLFORMER: A DUAL-CHANNEL FRAMEWORK FOR STRUCTURE-BASED DRUG DISCOVERY
|
| 2 |
+
|
| 3 |
+
Xiuyuan Hu $^{1}$ , Guoqing Liu $^{2*}$ , Can (Sam) Chen $^{3,4}$ , Yang Zhao $^{1}$ , Hao Zhang $^{1*}$ , Xue Liu $^{3,4}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Electronic Engineering, Tsinghua University, $^{2}$ Microsoft Research AI for Science, $^{3}$ McGill University, $^{4}$ Mila - Quebec AI Institute
|
| 6 |
+
|
| 7 |
+
huxy22@mails.tsinghua.edu.cn, guoqingliu@microsoft.com, can.chen@mila.quebec, zhao-yang@tsinghua.edu.cn, haozhang@tsinghua.edu.cn, xueliu@cs.mcgill.ca
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
Structure-based drug discovery, encompassing the tasks of protein-ligand docking and pocket-aware 3D drug design, represents a core challenge in drug discovery. However, no existing work can deal with both tasks to effectively leverage the duality between them, and current methods for each task are hindered by challenges in modeling 3D information and the limitations of available data. To address these issues, we propose 3DMolFormer, a unified dual-channel transformer-based framework applicable to both docking and 3D drug design tasks, which exploits their duality by utilizing docking functionalities within the drug design process. Specifically, we represent 3D pocket-ligand complexes using parallel sequences of discrete tokens and continuous numbers, and we design a corresponding dual-channel transformer model to handle this format, thereby overcoming the challenges of 3D information modeling. Additionally, we alleviate data limitations through large-scale pre-training on a mixed dataset, followed by supervised and reinforcement learning fine-tuning techniques respectively tailored for the two tasks. Experimental results demonstrate that 3DMolFormer outperforms previous approaches in both protein-ligand docking and pocket-aware 3D drug design, highlighting its promising application in structure-based drug discovery. The code is available at: https://github.com/HXYfighter/3DMolFormer.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
In recent years, the application of machine learning in drug discovery has gained significant traction (Mak et al., 2023), achieving substantial advancements in tasks such as molecular property prediction (Zhang et al., 2021; Wang et al., 2021; Wieder et al., 2020), protein structure prediction (Jumper et al., 2021; Baek et al., 2021; Lin et al., 2023), and drug molecular design (Olivecrona et al., 2017; Luo et al., 2021; Fu et al., 2022; Du et al., 2022; 2024). These developments hold the promise of dramatically enhancing the efficiency of drug development processes (Blanco-Gonzalez et al., 2023). Notably, the transformer architecture, which has seen breakthroughs in natural language processing (NLP) (Devlin et al., 2019; Brown et al., 2020), has been successfully adapted for molecular representation learning (Zhou et al., 2023a; Gao et al., 2024a), protein-ligand interaction prediction (Zhao et al., 2022; Abramson et al., 2024), and molecular generation tasks (Bagal et al., 2021; Hu et al., 2023).
|
| 16 |
+
|
| 17 |
+
Structure-based drug discovery (SBDD) is one of the most critical strategies in drug discovery practices, relying on theories of drug-receptor interactions to study the complexes formed between protein pockets and small molecule ligands (Van Montfort & Workman, 2017). SBDD encompasses two core tasks: (1) protein-ligand binding pose prediction (docking), which involves predicting the 3D binding conformation of a ligand given the 3D structure of a protein and the 2D representation of the ligand (Yang et al., 2022), and (2) pocket-aware 3D drug design, which entails designing 3D drug molecules that bind well (with low binding energy) to a given pocket target on a protein
|
| 18 |
+
|
| 19 |
+
structure (Zhang et al., 2023b; Isert et al., 2023b). These two tasks are inherently dual, and one is predictive, while the other is generative.
|
| 20 |
+
|
| 21 |
+
However, as of now, the application of machine learning in these two SBDD tasks remains widely recognized as a challenge (Pala & Clark, 2024). The accuracy and generalization of protein-ligand docking methods are still unsatisfactory (Morehead et al., 2024), and pocket-aware 3D drug design approaches have not achieved obvious improvements by explicitly utilizing 3D structural information compared to 2D methods (Zheng et al., 2024). This predicament can be attributed to three primary reasons:
|
| 22 |
+
|
| 23 |
+
- Underutilized duality: Protein-ligand docking and pocket-aware 3D drug design are naturally dual tasks, and improvements in docking performance could directly benefit drug design. However, since these two tasks are different in type (predictive vs. generative), this duality has unfortunately not been leveraged by previous machine learning approaches.
|
| 24 |
+
- Challenges in modeling 3D information: Modeling 3D information is a key difficulty in SBDD, as protein sequences and small molecule graphs contain only discrete information, whereas 3D coordinates are continuous values. Merging these two modalities of information has proven challenging (Zhu et al., 2022).
|
| 25 |
+
- Limited data: Ground-truth data on protein-ligand complexes are scarce. Currently, the largest dataset, PDBbind (Liu et al., 2017), contains fewer than 20,000 complexes, which is insufficient for training a robust machine learning model.
|
| 26 |
+
|
| 27 |
+
To address these challenges, we propose 3DMolFormer, a unified transformer-based framework for both of the two SBDD tasks. First, to fulfill the input-output causal relationships essential for both docking and 3D drug design, we introduce a parallel sequence format to represent a 3D complex of a protein pocket and a small molecule ligand, as shown in Figure 1 and 2, which comprises a token sequence for discrete protein atoms and small molecule SMILES, alongside a numerical sequence for 3D coordinates. Subsequently, we construct the 3DMolFormer model based on this parallel sequence, as illustrated in Figure 3, augmenting the GPT architecture (Radford et al., 2019) with a numerical head corresponding to the token head, enabling the model to be directly applied for autoregressive generation of the parallel sequences.
|
| 28 |
+
|
| 29 |
+
Due to data limitations, we utilize a "pre-training + fine-tuning" approach (Quinn et al., 2019) in NLP for 3DMolFormer, as large-scale pre-training helps mitigate these data challenges. During the pre-training phase, the model undergoes large-batch training (Keskar et al., 2017) on a large-scale mixed dataset, which includes data on protein pockets, ligands, and pocket-ligand complexes. A composite loss function is employed for autoregressive training of the parallel sequence, where cross-entropy loss applies to the token sequence and mean squared error loss applies to the numerical sequence. For the protein-ligand docking task, we perform supervised fine-tuning on the ground-truth pocket-ligand complexes, using the mean squared error of the numerical sequences corresponding to the ligand's 3D coordinates as the loss function. Moreover, to utilize the duality between the two SBDD tasks, for the pocket-aware drug design task, we apply a regularized maximum likelihood estimation loss for reinforcement learning fine-tuning, and leverage the weights fine-tuned for docking to generate the 3D coordinates of the small molecules.
|
| 30 |
+
|
| 31 |
+
Experimental results for protein-ligand docking demonstrate that 3DMolFormer outperforms all search-based and deep-learning docking baselines in binding pose prediction accuracy, particularly showing a reduction in samples with large prediction errors. Results for pocket-aware 3D drug design indicate that, through a carefully designed composite reward function, 3DMolFormer can generate drug candidates that achieve satisfactory binding affinity (docking score), drug-likeness, and synthesizability during the reinforcement learning process, and in particular significantly surpasses existing state-of-the-art baselines in binding affinity and success rate on multi-objective criteria. These results reflect the outstanding performance of the 3DMolFormer framework in structure-based drug discovery.
|
| 32 |
+
|
| 33 |
+
In summary, our main contributions include:
|
| 34 |
+
|
| 35 |
+
- We propose 3DMolFormer, the first unified framework applicable to both protein-ligand docking and pocket-aware 3D drug design.
|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
Figure 1: The parallel sequence of a protein pocket with 3D coordinates.
|
| 39 |
+
|
| 40 |
+

|
| 41 |
+
Figure 2: The parallel sequence of a small molecule ligand with 3D coordinates.
|
| 42 |
+
|
| 43 |
+
- We design a parallel sequence format for pocket-ligand complexes and establish a dual-channel transformer architecture to autoregressively generate this format, effectively addressing the challenges of modeling 3D information in SBDD.
|
| 44 |
+
- Through large-scale pre-training and respective fine-tuning, 3DMolFormer outperforms various previous baselines in both SBDD tasks.
|
| 45 |
+
|
| 46 |
+
# 2 RELATED WORKS
|
| 47 |
+
|
| 48 |
+
Molecular Pre-training The success of large-scale pre-training has extended from NLP to the field of drug discovery (Xia et al., 2023; Chen et al., 2023). Many studies focus on molecular representation learning, which maps molecular structures to informative embeddings for downstream predictive tasks (Yang et al., 2021a; Fang et al., 2022; Gao et al., 2024a). Several representation learning methods for protein-ligand binding have been proposed, including InteractionGraphNet (Jiang et al., 2021) and BindNet (Feng et al., 2024a), with Uni-Mol (Zhou et al., 2023a) collecting and pre-training on extensive 3D datasets of proteins and small molecules, achieving high accuracy in protein-ligand docking. Furthermore, models such as MolGPT (Bagal et al., 2021), Chemformer (Irwin et al., 2022), and BindGPT (Zholus et al., 2024) utilize pre-training to enhance molecular distribution learning, enabling applications in generative tasks.
|
| 49 |
+
|
| 50 |
+
Protein-ligand Docking Protein-ligand docking encompasses three sequential tasks: binding site prediction, binding pose prediction, and binding affinity prediction, with binding pose prediction being the most critical in structure-based drug discovery (Zhang et al., 2023b). Traditional search-based methods typically employ combinatorial optimization techniques to identify the best binding poses (known as targeted docking) within a given protein pocket, using tools such as AutoDock4 (Morris et al., 2009), AutoDock Vina (Trott & Olson, 2010; Eberhardt et al., 2021), and Smina (Koes et al., 2013), which are widely used in practical virtual screening. Recently, deep learning approaches have been introduced for this task, exemplified by DeepDock (Méndez-Lucio et al., 2021) and Uni-Mol (Zhou et al., 2023a). Additionally, various deep learning techniques for blind docking have emerged, which simultaneously predict binding sites and poses (Stärk et al., 2022; Lu et al., 2022; Zhang et al., 2023a; Pei et al., 2024; Corso et al., 2023; 2024). However, blind docking methods are primarily hindered by inaccuracies in binding site prediction, making direct comparisons with targeted docking methods less meaningful. Moreover, some end-to-end approaches that predict binding affinity without 3D poses fail to provide the crucial structural information required in SBDD (Wang et al., 2024a).
|
| 51 |
+
|
| 52 |
+
Pocket-aware 3D Drug Design Drug design is the ultimate goal of molecular design. Currently, most machine learning methods focus on generating 1D SMILES strings or 2D molecular graphs (Segler et al., 2018; Eckmann et al., 2022; Lee et al., 2024), with reinforcement learning
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
Figure 3: Overview of 3DMolFormer. The left shows the dual-channel model architecture, the top right illustrates the input and output of the two SBDD tasks in a parallel sequence, and the bottom right outlines the pre-training and fine-tuning process.
|
| 56 |
+
|
| 57 |
+
being a popular paradigm (Olivecrona et al., 2017; You et al., 2018; Ahn et al., 2020; Jin et al., 2020; Simm et al., 2020; Yang et al., 2021b). However, these approaches can only output discrete information about atoms and chemical bonds, lacking the capability to generate 3D coordinate values, thus limiting their application in SBDD. In contrast, pocket-aware 3D drug design explicitly utilizes the 3D structures of protein targets to generate de novo small molecules with high binding affinity. Various machine learning techniques have been applied to pocket-aware 3D drug design, including genetic algorithms (e.g., AutoGrow (Spiegel & Durrant, 2020)), variational autoencoders (e.g., liGAN (Ragoza et al., 2022)), autoregressive models (e.g., AR (Luo et al., 2021), Pocket2Mol (Peng et al., 2022), Lingo3DMol (Feng et al., 2024b)), and flow models (GraphBP (Liu et al., 2022)). Recently, diffusion models have achieved state-of-the-art performance in this task, including DiffSBDD (Schneuing et al., 2022), TargetDiff (Guan et al., 2023a), and DecompDiff (Guan et al., 2023b). Notably, some studies have developed transformer-based 3D drug design models. The XYZ-transformer (Flam-Shepherd & Aspuru-Guzik, 2023) directly uses 3D coordinate values (retaining three decimal places) as tokens, while BindGPT (Zholus et al., 2024) decomposes the integer and decimal parts of coordinates into two tokens to reduce vocabulary size. Token-Mol (Wang et al., 2024b), on the other hand, employs torsion angles of small molecules instead of coordinate values to shorten sequence lengths. However, these methods represent values using discrete tokens, which disrupts the continuity of coordinates.
|
| 58 |
+
|
| 59 |
+
# 3 3DMOLFORMER
|
| 60 |
+
|
| 61 |
+
# 3.1 FORMAT OF POCKET AND LIGAND SEQUENCES WITH 3D COORDINATES
|
| 62 |
+
|
| 63 |
+
To leverage a causal language model for handling 3D protein pockets and small molecules while explicitly separating discrete structural information from continuous spatial coordinates, we design a parallel sequence format. This format consists of a discrete token sequence $s_{\mathrm{tok}}$ and a continuous numerical sequence $s_{\mathrm{num}}$ , both of which share the same length and align element-wise. The token sequence consists of tokens in a predefined vocabulary, while the numerical sequence contains floating-point values.
|
| 64 |
+
|
| 65 |
+
As shown in Figure 1, the sequence for a protein pocket $s^{\mathrm{poc}}$ consists of two parts: the first $s^{\mathrm{poc\_atoms}}$ represents an atomic list, and the second $s^{\mathrm{poc\_coord}}$ contains 3D coordinate information. The atomic list is encoded in the token sequence, which includes all atoms in the protein pocket except for
|
| 66 |
+
|
| 67 |
+
hydrogen atoms. Aside from alpha carbon atoms, denoted as 'CA', other atoms are represented by their element type, such as 'C', 'O', 'N', and 'S'. The sequence of atoms follows the order of the pdb file, where each amino acid begins with ['N', 'CA', 'C', 'O'] followed by the side-chain atoms. The normalized 3D coordinates for each atom in the atomic list are included in the numerical sequence in the same order, with each dimension ('x', 'y', 'z') occupying a separate position. The length of the 3D coordinate sequence is always three times the length of the atomic list. Moreover, in the token sequence, the start and end of the atomic list are marked by 'PS' and 'PE', while the 3D coordinates are delineated by 'PCS' and 'PCE' at the start and end, respectively. In the numerical sequence, numbers that do not correspond to 3D coordinates are padded with 1.0.
|
| 68 |
+
|
| 69 |
+
As illustrated in Figure 2, the sequence for a small molecule $s^{\mathrm{lig}}$ is similar to that of the protein pocket, comprising both a SMILES string section $s^{\mathrm{lig\_smiles}}$ and a 3D coordinate section $s^{\mathrm{lig\_coord}}$ . After atom-level tokenization (Schwaller et al., 2019), the SMILES string of the small molecule is encoded in the token sequence, excluding hydrogen atoms. It is important to note that some tokens may not correspond to atoms, and thus, no 3D coordinates will be associated with them. The normalized 3D coordinates for each atom in the tokenized SMILES string are included in the numerical sequence, with each coordinate dimension ('x', 'y', 'z') occupying a separate position. The length of the 3D coordinate sequence is always three times the number of atoms in the small molecule. In the token sequence, the start and end of the SMILES tokens are marked by 'LS' and 'LE', while the 3D coordinates of the corresponding atoms are marked by 'LCS' and 'LCE' at the start and end, respectively. In the numerical sequence, numbers not corresponding to 3D coordinates are similarly padded with 1.0.
|
| 70 |
+
|
| 71 |
+
When the sequence of a protein pocket is concatenated with that of a small molecule ligand, it forms a pocket-ligand complex sequence along with their 3D coordinates $s^{\mathrm{poc - lig}}$ . This sequence format offers three advantages:
|
| 72 |
+
|
| 73 |
+
- It fully encapsulates the structural and 3D coordinate information of both the protein pocket and the small molecule ligand.
|
| 74 |
+
- Discrete structural information and continuous numerical data are separated into two parallel sequences, enabling independent processing of each data type.
|
| 75 |
+
- The sequence of the pocket-ligand complex maintains causal logic. As depicted in the upper right of Figure 3, this sequence structure allows autoregressive prediction, which can effectively represent both pocket-ligand docking and pocket-aware drug design tasks.
|
| 76 |
+
|
| 77 |
+
Specifically, we normalize the coordinates of all pocket-ligand complexes by translating their center of mass to the origin $(0,0,0)$ . Additionally, to ensure numerical stability during training (Quinn et al., 2019), we scale the coordinate values by a factor $q > 1$ to reduce the range of their distribution:
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
\left(x _ {i} ^ {\prime}, y _ {i} ^ {\prime}, z _ {i} ^ {\prime}\right) = \left(\frac {x _ {i} - x _ {c}}{q}, \frac {y _ {i} - y _ {c}}{q}, \frac {z _ {i} - z _ {c}}{q}\right), \tag {1}
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
where $(x_{i},y_{i},z_{i})$ is the original coordinate of the $i$ -th atom, $(x_{c},y_{c},z_{c})$ is the coordinate of the center of mass, and $(x_i',y_i',z_i')$ refers to the normalized values used in the numerical sequence.
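A minimal sketch of Eq. (1) is given below; the scaling factor $q$ used here is illustrative (its exact value is not stated in this excerpt), and uniform atom weights are assumed when computing the center of mass.

```python
import numpy as np

def normalize_coordinates(coords, q=10.0):
    """Translate atom coordinates of a complex so their center of mass sits at the
    origin, then scale by a factor q > 1, as in Eq. (1).
    `coords` is an (N, 3) array; q = 10.0 is an illustrative value."""
    coords = np.asarray(coords, dtype=np.float64)
    center = coords.mean(axis=0)   # center of mass (uniform atom weights assumed)
    return (coords - center) / q

# Example: three atoms of a toy ligand
print(normalize_coordinates([[12.0, 3.0, -5.0], [14.0, 1.0, -4.0], [13.0, 2.0, -6.0]]))
```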
|
| 84 |
+
|
| 85 |
+
# 3.2 MODEL ARCHITECTURE
|
| 86 |
+
|
| 87 |
+
To process the aforementioned parallel sequences, we require an autoregressive language model that can simultaneously take a discrete token sequence and a continuous floating-point sequence as input, while predicting both the next token and the next numerical value. Inspired by xVal (Golkar et al., 2023), we propose a dual-channel transformer architecture for 3DMolFormer, as illustrated in the left part of Figure 3. The module handling the token sequence is based on the GPT-2 model (Radford et al., 2019), featuring identical token embeddings, positional embeddings, multiple transformer layers, and a prediction head for logits. On top of this, we introduce a parallel numerical channel at both the input and output stages.
|
| 88 |
+
|
| 89 |
+
At the input stage, we multiply the embedding of each token in the token sequence with the corresponding value in the numerical sequence, using this product as the input to the positional embedding. This is why numerical values that lack meaningful information are padded with 1.0. At the output stage, in parallel with the token prediction head, we add a number head to predict the next floating-point value.
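To make the dual-channel input and output concrete, here is a minimal PyTorch-style sketch that uses a generic transformer backbone in place of the GPT-2 blocks; the layer names, feed-forward width, and the omission of a causal attention mask are simplifications, not the actual implementation.

```python
import torch
import torch.nn as nn

class DualChannelModel(nn.Module):
    """Minimal sketch of the dual-channel architecture described above,
    assuming a GPT-2-small-sized backbone (hidden size 768)."""

    def __init__(self, vocab_size, d_model=768, n_layers=12, n_heads=12, max_len=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # stand-in for GPT-2 blocks
        self.token_head = nn.Linear(d_model, vocab_size)        # next-token logits
        self.number_head = nn.Linear(d_model, 1)                # next floating-point value

    def forward(self, tokens, numbers):
        # Input stage: scale each token embedding by its paired numerical value
        # (positions without a coordinate carry the padding value 1.0), then add
        # positional embeddings.
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.tok_emb(tokens) * numbers.unsqueeze(-1) + self.pos_emb(pos)
        h = self.backbone(h)  # a real implementation would apply a causal attention mask
        return self.token_head(h), self.number_head(h).squeeze(-1)
```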
|
| 90 |
+
|
| 91 |
+
During inference with 3DMolFormer, the outputs are handled in two modes:
|
| 92 |
+
|
| 93 |
+
- Token Mode: In the drug design task, when predicting ligand SMILES tokens, the corresponding numerical output holds no meaningful value and is therefore padded with 1.0.
|
| 94 |
+
- Numerical Mode: In docking and drug design tasks, once the ligand SMILES is determined, the length of the 3D coordinate sequence and its tokens are also fixed. Therefore, the token output no longer holds meaningful information and is filled with the expected tokens (from ['x', 'y', 'z', 'LCS', 'LCE']). When the position corresponds to 'x', 'y', or 'z', the predicted floating-point values are appended to the input numerical sequence. For tokens corresponding to 'LCS' and 'LCE', the numerical values are also set to 1.0.
|
| 95 |
+
|
| 96 |
+
# 3.3 SELF-SUPERVISED PRE-TRAINING
|
| 97 |
+
|
| 98 |
+
To enable the 3DMolFormer model to learn the general patterns of pocket-ligand complex sequences, we conduct large-scale pre-training on 3D data, which includes three datasets: approximately 3.2M protein pockets, about 209M small molecule conformations, and around 167K pocket-ligand complexes. The first two datasets were collected by Uni-Mol (Zhou et al., 2023a) for large-scale pre-training on 3D protein pockets and small molecules, while the last dataset was generated by CrossDocked2020 (Francoeur et al., 2020).
|
| 99 |
+
|
| 100 |
+
In order for the dual-channel autoregressive model to capture both the token sequence format and the 3D coordinate patterns of pocket-ligand complexes, we adopt a composite loss function for the prediction of the next token and the corresponding numerical value. This loss function incorporates the cross-entropy (CE) loss for the whole token sequence and the mean squared error (MSE) loss for the numerical sequence corresponding to the 3D coordinates:
|
| 101 |
+
|
| 102 |
+
$$
|
| 103 |
+
L (\hat {s}, s) = \operatorname {C E} \left(\hat {s} _ {\mathrm {t o k}}, s _ {\mathrm {t o k}}\right) + \alpha \cdot \operatorname {M S E} \left(\hat {s} _ {\mathrm {n u m}} ^ {\mathrm {c o o r d}}, s _ {\mathrm {n u m}} ^ {\mathrm {c o o r d}}\right), \tag {2}
|
| 104 |
+
$$
|
| 105 |
+
|
| 106 |
+
where $\hat{s}$ represents the sequence predicted by 3DMolFormer, $s$ refers to the training data, and $\alpha$ is a coefficient that controls the balance between the CE loss and the MSE loss. This composite loss is applied to all of the three types of pre-training data.
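A minimal sketch of how this composite loss could be computed is shown below, assuming `coord_mask` marks the positions of the numerical sequence that carry 3D coordinates.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(token_logits, target_tokens, pred_numbers, target_numbers,
                     coord_mask, alpha=1.0):
    """Composite next-token / next-value loss of Eq. (2): cross-entropy over the whole
    token sequence plus alpha * MSE restricted to coordinate positions.
    `coord_mask` is a boolean tensor of the same shape as the numerical sequence."""
    ce = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                         target_tokens.reshape(-1))
    mse = F.mse_loss(pred_numbers[coord_mask], target_numbers[coord_mask])
    return ce + alpha * mse
```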
|
| 107 |
+
|
| 108 |
+
Additionally, we employ large-batch training (Keskar et al., 2017) through gradient accumulation, which we found to be crucial for the pre-training stability of 3DMolFormer. For further details on pre-training and hyper-parameter settings, please refer to Section 4 and Appendix B.
|
| 109 |
+
|
| 110 |
+
# 3.4 FINE-TUNING
|
| 111 |
+
|
| 112 |
+
After the large-scale pre-training, we further fine-tune the 3DMolFormer model on two downstream drug discovery tasks: supervised fine-tuning for pocket-ligand docking, and reinforcement learning (RL) fine-tuning for pocket-aware drug design.
|
| 113 |
+
|
| 114 |
+
# 3.4.1 SUPERVISED FINE-TUNING FOR PROTEIN-LIGAND BINDING POSE PREDICTION
|
| 115 |
+
|
| 116 |
+
In the protein-ligand binding pose prediction (docking) task, as illustrated in Figure 3, each sample consists of a pocket-ligand complex. The input sequence contains the atoms of the protein pocket and their 3D coordinates, along with the SMILES sequence of the ligand. The output is the 3D coordinates of each atom in the ligand.
|
| 117 |
+
|
| 118 |
+
The pre-training data for 3DMolFormer already includes about $167\mathrm{K}$ pocket-ligand complexes from CrossDocked2020 (Francoeur et al., 2020); however, these complexes are generated using the docking software Smina (Koes et al., 2013), which means that the docking performance of models trained with this data would not exceed that of Smina. To improve the upper limit of our model's docking performance, we fine-tune it on the experimentally determined PDBbind dataset (Liu et al., 2017), which contains approximately $17\mathrm{K}$ ground-truth pocket-ligand complexes. Additionally, we employ a task-specific loss function that computes the mean squared error (MSE) loss only for the 3D coordinates of the ligand in the context of next numerical value prediction, since the inference process of docking operates entirely in numerical mode:
|
| 119 |
+
|
| 120 |
+
$$
|
| 121 |
+
L _ {\text {d o c k i n g}} \left(\hat {s} ^ {\text {l i g - c o o r d}}, s ^ {\text {l i g - c o o r d}}\right) = \operatorname {M S E} \left(\hat {s} _ {\text {n u m}} ^ {\text {l i g - c o o r d}}, s _ {\text {n u m}} ^ {\text {l i g - c o o r d}}\right). \tag {3}
|
| 122 |
+
$$
|
| 123 |
+
|
| 124 |
+
To mitigate overfitting during supervised fine-tuning, SMILES randomization (Arús-Pous et al., 2019) and random rotation of the 3D coordinates of complexes are used as data augmentation strategies. For further details on docking fine-tuning, please refer to Section 4.1 and Appendix C.
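As a rough illustration of the rotation augmentation (the paper does not specify the sampling scheme), a common way to draw a uniformly distributed random rotation and apply it to the coordinates of a complex is:

```python
import numpy as np

def random_rotation(coords, rng=None):
    """Apply one random 3D rotation to all atoms of a complex (an (N, 3) array),
    keeping the relative pocket-ligand geometry intact. QR decomposition of a
    Gaussian matrix, with sign correction, yields a uniform random rotation."""
    rng = rng or np.random.default_rng()
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # make the factorization unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return np.asarray(coords) @ q.T
```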
|
| 125 |
+
|
| 126 |
+
# 3.4.2 RL FINE-TUNING FOR POCKET-AWARE 3D DRUG DESIGN
|
| 127 |
+
|
| 128 |
+
In the pocket-aware drug design task, as illustrated in Figure 3, each sample is also a pocket-ligand pair. The input sequence includes the atoms of the protein pocket and their 3D coordinates, while the output consists of the ligand SMILES sequence and the 3D coordinates of its atoms.
|
| 129 |
+
|
| 130 |
+
Inspired by 1D RL-based molecular generation methods (Olivecrona et al., 2017), an RL agent with the 3DMolFormer architecture is initialized with the pre-trained weights, and a molecular property scoring function for each protein pocket is designed as the RL reward. Then, the agent is iteratively optimized to maximize the expected reward of its outputs. Specifically, at each RL step, the agent samples a batch of 3D ligands, and the regularized maximum likelihood estimation (MLE) loss (Svensson et al., 2023) of each ligand is computed and used to update the agent:
|
| 131 |
+
|
| 132 |
+
$$
|
| 133 |
+
L _ {\text {d e s i g n}} (\hat {s} ^ {\mathrm {l i g}}) = \left(\log \pi_ {\text {p r e - t r a i n e d}} (\hat {s} _ {\text {t o k}} ^ {\text {l i g - s m i l e s}}) + \sigma \cdot R (m) - \log \pi_ {\text {a g e n t}} (\hat {s} _ {\text {t o k}} ^ {\text {l i g - s m i l e s}})\right) ^ {2}, \tag {4}
|
| 134 |
+
$$
|
| 135 |
+
|
| 136 |
+
where $\hat{s}^{\mathrm{lig}}$ ( $\hat{s}^{\mathrm{lig\_smiles}}$ and $\hat{s}^{\mathrm{lig\_coord}}$ ) is a sample generated by the RL agent, $m$ is the 3D molecule represented by $\hat{s}^{\mathrm{lig}}$ , and $R(\cdot)$ is the reward function evaluating the properties of the molecule. $\pi_{\mathrm{pre-trained}}(s)$ is the likelihood of the pre-trained 3DMolFormer model for generating the sequence $s$ , $\pi_{\mathrm{agent}}(s)$ is the corresponding likelihood of the agent model, and $\sigma$ is a coefficient hyper-parameter that controls the importance of the reward. This loss function encourages the agent to generate molecules with higher expected rewards while retaining a low deviation from the pre-trained weights.
|
| 137 |
+
|
| 138 |
+
It is important to note that to leverage the duality of the two SBDD tasks, the sampling of ligand SMILES utilizes the weights of the RL agent's model, which are continuously updated during fine-tuning. In contrast, the generation of atomic 3D coordinates uses the weights from the model fine-tuned for docking, which remains unchanged during this process. For additional details on RL fine-tuning and hyper-parameter settings, please refer to Section 4.2 and Appendix D.
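A minimal sketch of the per-sample loss in Eq. (4) is given below, assuming the log-likelihoods are summed over the SMILES tokens and the pre-trained term is treated as a constant with no gradient.

```python
import torch

def design_loss(logp_pretrained, logp_agent, reward, sigma=100.0):
    """Regularized MLE loss of Eq. (4) for one sampled ligand SMILES:
    (log pi_pretrained + sigma * R(m) - log pi_agent)^2.
    `logp_pretrained` and `logp_agent` are summed token log-likelihoods (tensors),
    and `reward` is R(m) in [0, 1]."""
    augmented = logp_pretrained.detach() + sigma * reward  # treated as a constant target
    return (augmented - logp_agent) ** 2
```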
|
| 139 |
+
|
| 140 |
+
# 4 EXPERIMENTS
|
| 141 |
+
|
| 142 |
+
In this section, we present the results of two parts of experiments: pocket-ligand docking and pocket-aware 3D drug design. Through pre-training, the 3DMolFormer model is theoretically capable of being applied to the conformation generation of small molecules; however, as Zhou et al. (2023b) pointed out, the existing benchmarks for conformation generation are flawed, so we do not conduct this experiment.
|
| 143 |
+
|
| 144 |
+
Following the configuration of the GPT-2 small model (Radford et al., 2019), the 3DMolFormer model with a total of 92M parameters has 12 transformer layers, each containing 12 self-attention heads, and the embedding dimension is 768. The maximum length for the parallel sequences is set to 2048, which covers over $99\%$ of the samples in the training set as well as all samples in the test set for protein-ligand docking.
|
| 145 |
+
|
| 146 |
+
For pre-training, all samples with a coordinate range larger than 40 are screened out. Then, we replicate each protein pocket five times and each pocket-ligand complex twenty times, mixing them with small molecule conformations, resulting in a total of 228M training data samples. 3DMolFormer is pre-trained on this dataset for only one epoch, using a batch size of 10K implemented by gradient accumulation. The maximal learning rate is set to $5 \times 10^{-4}$ with a warmup period of $1\%$ of the steps followed by cosine decay. An AdamW optimizer (Loshchilov & Hutter, 2019) with a weight decay factor of 0.1 is employed, and the coefficient $\alpha$ in the loss function of Eq. (2) is set to 1.0. The pre-training process takes less than 48 hours with 4 A100 80G GPUs. For further details on the selection of hyper-parameters for pre-training, please refer to Appendix B.
|
| 147 |
+
|
| 148 |
+
# 4.1 PROTEIN-LIGAND BINDING POSE PREDICTION
|
| 149 |
+
|
| 150 |
+
Experiments of protein-ligand binding pose prediction are conducted in the targeted and semi-flexible docking scenario, where the protein pocket for binding is specified and fixed, while the ligand conformation is entirely flexible.
|
| 151 |
+
|
| 152 |
+
Table 1: Experimental results of 3DMolFormer, its variants, and other baselines on protein-ligand binding pose prediction, following the results reported in Uni-Mol (Zhou et al., 2023a). $(\uparrow) / (\downarrow)$ denotes that a higher / lower value is better. The best result in each column is bolded.
|
| 153 |
+
|
| 154 |
+
<table><tr><td>Methods</td><td>%<1.0Å (↑)</td><td>%<2.0Å (↑)</td><td>%<3.0Å (↑)</td><td>%<5.0Å (↑)</td><td>Avg. (↓)</td></tr><tr><td>AutoDock4</td><td>21.8</td><td>35.4</td><td>47.0</td><td>64.6</td><td>3.53</td></tr><tr><td>AutoDock Vina</td><td>44.2</td><td>64.6</td><td>73.7</td><td>84.6</td><td>2.37</td></tr><tr><td>Vinardo</td><td>41.8</td><td>62.8</td><td>69.8</td><td>76.8</td><td>2.49</td></tr><tr><td>Smina</td><td>47.4</td><td>65.3</td><td>74.4</td><td>82.1</td><td>1.84</td></tr><tr><td>Uni-Mol</td><td>43.2</td><td>80.4</td><td>87.0</td><td>94.0</td><td>1.62</td></tr><tr><td>3DMolFormer w/o PT</td><td>15.5</td><td>57.8</td><td>78.1</td><td>92.4</td><td>2.25</td></tr><tr><td>3DMolFormer w/o DA</td><td>10.3</td><td>51.0</td><td>74.9</td><td>91.6</td><td>2.45</td></tr><tr><td>3DMolFormer</td><td>43.8</td><td>84.9</td><td>96.4</td><td>98.8</td><td>1.29</td></tr></table>
|
| 155 |
+
|
| 156 |
+
Data Following Uni-Mol (Zhou et al., 2023a), we use PDBbind v2020 (Liu et al., 2017) as the training set for supervised fine-tuning on protein-ligand docking and CASF-2016 (Su et al., 2018) as the test set, which includes 285 test samples. In addition, we apply the same data filtering process as Uni-Mol to remove training samples with high similarity to the protein sequences or molecular structures of the complexes in the test set, which results in a training set comprising 18,404 ground-truth complexes.
|
| 157 |
+
|
| 158 |
+
Baselines We select four search-based methods: AutoDock4 (Morris et al., 2009), AutoDock Vina (Trott & Olson, 2010; Eberhardt et al., 2021), Vinardo (Quiroga & Villarreal, 2016), and Smina (Koes et al., 2013), along with Uni-Mol (Zhou et al., 2023a), which is currently the state-of-the-art deep learning method for targeted docking, as our baselines.
|
| 159 |
+
|
| 160 |
+
Ablation Studies Two variants of 3DMolFormer are established: (1) training a 3DMolFormer model from scratch on the fine-tuning set for protein-ligand docking without pre-training (w/o PT), and (2) fine-tuning the pre-trained 3DMolFormer model without data augmentation (w/o DA).
|
| 161 |
+
|
| 162 |
+
Evaluation The root mean square deviation (RMSD) between the predicted ligand pose and the ground truth is used to assess binding pose accuracy. Specifically, two metrics are employed: (1) the percentage of RMSD results that fall below predefined thresholds, with higher percentages indicating better performance, and (2) the average RMSD, where lower values are preferred.
|
| 163 |
+
|
| 164 |
+
Fine-tuning For supervised fine-tuning for pocket-ligand binding pose prediction, we train the model for 2000 epochs with a batch size of 128. The maximum learning rate is set to $1 \times 10^{-4}$ , with a warmup period of $1\%$ of the steps and cosine decay applied thereafter. The training process takes less than 24 hours with 4 A100 80G GPUs.
|
| 165 |
+
|
| 166 |
+
Results As shown in Table 1, 3DMolFormer outperforms all baselines in both average RMSD and the percentage of predictions with RMSD less than 2.0, 3.0, and $5.0\AA$ . Notably, it significantly surpasses other methods in the percentages for RMSD below 3.0 and $5.0\AA$ . This indicates that 3DMolFormer is less prone to making "large errors" compared to the baselines, reflecting its robustness. However, for the percentage of predictions with RMSD below $1.0\AA$ , the search-based method Smina outperforms the deep learning approaches, suggesting that there is still room for improvement in the ability of deep learning methods to capture the intricate interactions between protein pockets and ligands. Moreover, the ablation studies demonstrate that the pre-training and data augmentation both play a crucial role in the training of the 3DMolFormer docking model.
|
| 167 |
+
|
| 168 |
+
It is worth noting that, unlike all baseline methods, 3DMolFormer does not require an initialized 3D conformation of the ligand as input, indicating that the model has acquired the capability to predict small molecule 3D conformations through pre-training. This feature enhances the usability of 3DMolFormer compared to previous docking approaches.
|
| 169 |
+
|
| 170 |
+
Additionally, the average time taken by 3DMolFormer to predict a binding pose is 0.8 seconds using 1 A100 80G GPU, and this can be significantly accelerated through parallel inference. This suggests that 3DMolFormer has great potential for applications in large-scale virtual screening. For
|
| 171 |
+
|
| 172 |
+
further details and results of experiments on protein-ligand binding pose prediction, please refer to Appendix C.
|
| 173 |
+
|
| 174 |
+
# 4.2 POCKET-AWARE 3D DRUG DESIGN
|
| 175 |
+
|
| 176 |
+
In the experiments for pocket-aware 3D drug design, small molecule ligands and their 3D conformations are designed to bind well with a specified pocket on a protein whose structure remains fixed.
|
| 177 |
+
|
| 178 |
+
Data Following previous works (Peng et al., 2022; Guan et al., 2023a;b), we select 100 protein pockets from the CrossDocked2020 (Francoeur et al., 2020) dataset that exhibit low similarity (< $30\%$ ) to the protein sequences of pocket-ligand complexes used in pre-training, thereby establishing our targets for 3D drug design.
|
| 179 |
+
|
| 180 |
+
Baselines We compare 3DMolFormer against various baselines for pocket-aware 3D molecular generation, including AR (Luo et al., 2021), liGAN (Ragoza et al., 2022), GraphBP (Liu et al., 2022), Pocket2Mol (Peng et al., 2022), TargetDiff (Guan et al., 2023a), and DecompDiff (Guan et al., 2023b). Additionally, we report the results of the ligands corresponding to the 100 protein pockets in the CrossDocked2020 dataset for reference.
|
| 181 |
+
|
| 182 |
+
Evaluation In alignment with previous works, we evaluate 100 3D molecules generated for each protein pocket. Four metrics are selected to comprehensively assess the potential of generated molecules in practical drug design: (1) Vina Score, which directly estimates the binding affinity based on the generated 3D molecules; (2) Vina Dock, representing the best possible binding affinity of the molecules estimated by re-docking; (3) QED (Quantitative Estimate of Drug-likeness) (Bickerton et al., 2012); and (4) SA (Synthetic Accessibility) (Ertl & Schuffenhauer, 2009) $^{1}$ . We employ Quick Vina 2 (Alhossary et al., 2015) to estimate the binding affinity, which is an efficient alternative to AutoDock Vina. For all metrics, we report their average values across designed drug molecules for all protein pockets. Following Long et al. (2022) and Guan et al. (2023b), we also report the percentage of designed drug molecules meeting specific criteria: Vina Dock $< -8.18$ , QED $>0.25$ , and SA $>0.59$ . This percentage, referred to as the Success Rate, reflects the performance of different methods in multi-objective drug design, which is a common scenario in practical drug discovery.
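To make the Success Rate criterion concrete, the following sketch marks a designed molecule as successful only if it clears all three cutoffs quoted above simultaneously, and reports the fraction of successes. The function names and input format are hypothetical.

```python
def is_success(vina_dock, qed, sa):
    """Multi-objective success criterion: Vina Dock < -8.18, QED > 0.25, SA > 0.59."""
    return (vina_dock < -8.18) and (qed > 0.25) and (sa > 0.59)

def success_rate(molecule_scores):
    """molecule_scores: iterable of (vina_dock, qed, sa) tuples for generated molecules."""
    scores = list(molecule_scores)
    return sum(is_success(*s) for s in scores) / max(len(scores), 1)
```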
|
| 183 |
+
|
| 184 |
+
Reward Function For the aforementioned drug design objectives, we formulate a composite reward function for the RL fine-tuning process ($R(m)$ in Eq. (4)). First, a reverse sigmoid function (Hu et al., 2023) is applied to transform the Vina Dock score into a range of $[0,1]$ , where higher values are preferable:
|
| 185 |
+
|
| 186 |
+
$$
|
| 187 |
+
R_{\mathrm{Dock}}(m) = 1 / \left(1 + 10^{0.625 \cdot (\mathrm{VinaDock}(m) + 10)}\right), \tag{5}
|
| 188 |
+
$$
|
| 189 |
+
|
| 190 |
+
where $m$ refers to a small molecule.
|
| 191 |
+
|
| 192 |
+
Next, we utilize a step function for QED and SA, as these properties are auxiliary to the docking score; thus, they only need to exceed certain thresholds rather than aiming for higher values.
|
| 193 |
+
|
| 194 |
+
$$
|
| 195 |
+
R_{\mathrm{QED}}(m) = \mathbb{I}(\mathrm{QED}(m) > 0.25), \quad R_{\mathrm{SA}}(m) = \mathbb{I}(\mathrm{SA}(m) > 0.59), \tag{6}
|
| 196 |
+
$$
|
| 197 |
+
|
| 198 |
+
where $\mathbb{I}(\cdot)$ represents the indicator function.
|
| 199 |
+
|
| 200 |
+
Finally, the mean of these three scores is employed as the RL reward function:
|
| 201 |
+
|
| 202 |
+
$$
|
| 203 |
+
R(m) = \frac{1}{3} \left[ R_{\mathrm{Dock}}(m) + R_{\mathrm{QED}}(m) + R_{\mathrm{SA}}(m) \right]. \tag{7}
|
| 204 |
+
$$
|
| 205 |
+
|
| 206 |
+
This composite reward is also used as the multi-objective criterion for selecting drug candidates from all generated molecules.
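A minimal sketch of Eqs. (5)-(7), assuming the Vina Dock score, QED, and (transformed) SA values have already been computed for a generated molecule $m$; the helper names are hypothetical:

```python
def reward_dock(vina_dock):
    # Eq. (5): reverse sigmoid maps Vina Dock scores to (0, 1), higher is better.
    return 1.0 / (1.0 + 10.0 ** (0.625 * (vina_dock + 10.0)))

def reward_qed(qed):
    # Eq. (6): step function, 1 if QED exceeds the 0.25 threshold.
    return float(qed > 0.25)

def reward_sa(sa):
    # Eq. (6): step function, 1 if the transformed SA exceeds the 0.59 threshold.
    return float(sa > 0.59)

def composite_reward(vina_dock, qed, sa):
    # Eq. (7): mean of the three component rewards.
    return (reward_dock(vina_dock) + reward_qed(qed) + reward_sa(sa)) / 3.0
```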
|
| 207 |
+
|
| 208 |
+
Fine-tuning For the reinforcement learning fine-tuning aimed at pocket-aware 3D drug design, we execute 500 RL steps for each protein pocket, with a batch size of 128 and a constant learning rate of $1 \times 10^{-4}$ . The parameter $\sigma$ in Eq. (4) is set to 100. The RL process for each protein pocket takes less than 8 hours using 1 A100 80G GPU and 128 CPU cores, with the computation of the Vina Dock reward running in parallel on the CPU cores.
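The exact form of Eq. (4) is defined earlier in the paper and not restated here; purely as an illustration of how such an RL fine-tuning step could be organized, the sketch below assumes a REINVENT-style augmented-likelihood objective in which $\sigma$ scales the reward term. All names are hypothetical and this is not the authors' implementation.

```python
import torch

def rl_finetune_step(agent, prior, batch_sequences, rewards, optimizer, sigma=100.0):
    """One reward-weighted fine-tuning step, assuming an augmented-likelihood loss
    of the form (log p_prior(x) + sigma * R(x) - log p_agent(x))^2.

    agent, prior: wrappers returning per-sequence log-likelihoods via .log_prob(seqs).
    batch_sequences: generated pocket-conditioned ligand sequences.
    rewards: tensor of precomputed composite rewards R(m), one per sequence.
    """
    agent_ll = agent.log_prob(batch_sequences)        # requires gradients
    with torch.no_grad():
        prior_ll = prior.log_prob(batch_sequences)    # frozen pre-trained copy
    augmented = prior_ll + sigma * rewards
    loss = torch.mean((augmented - agent_ll) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```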
|
| 209 |
+
|
| 210 |
+
Table 2: Experimental results of 3DMolFormer and other baselines on pocket-aware 3D drug design, following the results reported in DecompDiff (Guan et al., 2023b). $(\uparrow) / (\downarrow)$ denotes that a higher / lower value is better. The best result in each column is bolded.
|
| 211 |
+
|
| 212 |
+
<table><tr><td>Methods</td><td>Vina Score (↓)</td><td>Vina Dock (↓)</td><td>QED (↑)</td><td>SA (↑)</td><td>Success Rate (↑)</td></tr><tr><td>Reference</td><td>-6.36</td><td>-7.45</td><td>0.48</td><td>0.73</td><td>25.0%</td></tr><tr><td>AR</td><td>-5.75</td><td>-6.75</td><td>0.51</td><td>0.63</td><td>7.1%</td></tr><tr><td>liGAN</td><td>-</td><td>-6.33</td><td>0.39</td><td>0.59</td><td>3.9%</td></tr><tr><td>GraphBP</td><td>-</td><td>-4.80</td><td>0.43</td><td>0.49</td><td>0.1%</td></tr><tr><td>Pocket2Mol</td><td>-5.14</td><td>-7.15</td><td>0.56</td><td>0.74</td><td>24.4%</td></tr><tr><td>TargetDiff</td><td>-5.47</td><td>-7.80</td><td>0.48</td><td>0.58</td><td>10.5%</td></tr><tr><td>DecompDiff</td><td>-5.67</td><td>-8.39</td><td>0.45</td><td>0.61</td><td>24.5%</td></tr><tr><td>3DMolFormer</td><td>-6.02</td><td>-9.48</td><td>0.49</td><td>0.78</td><td>85.3%</td></tr></table>
|
| 213 |
+
|
| 214 |
+
Results As shown in Table 2, the molecules designed by 3DMolFormer outperform those of all baselines on four metrics: Vina Score, Vina Dock, SA, and Success Rate. Notably, it exhibits a large advantage in Success Rate, becoming the first method to exceed the reference value provided by the dataset on this key metric. Its average QED (0.49) also clearly exceeds the 0.25 threshold used in the Success Rate criterion. These results indicate that 3DMolFormer outperforms existing 3D drug design methods in binding affinity optimization and multi-objective joint optimization, highlighting its strong potential for real-world drug discovery applications.
|
| 215 |
+
|
| 216 |
+
For further details, results, and a case study of experiments on pocket-aware 3D drug design, please refer to Appendix D.
|
| 217 |
+
|
| 218 |
+
# 5 CONCLUSION AND DISCUSSION
|
| 219 |
+
|
| 220 |
+
In this paper, we introduce 3DMolFormer for structure-based drug discovery, a dual-channel transformer-based framework designed to process parallel sequences of tokens and numerical values representing pocket-ligand complexes. Through self-supervised large-scale pre-training and supervised fine-tuning, 3DMolFormer can accurately and efficiently predict the binding poses of ligands to protein pockets. Furthermore, through reinforcement learning fine-tuning, 3DMolFormer can generate drug candidates that exhibit high binding affinities for a given protein target, along with favorable drug-likeness and synthesizability. Above all, 3DMolFormer is the first machine learning framework that can simultaneously address both protein-ligand docking and pocket-aware 3D drug design, and it outperforms previous baselines in both tasks.
|
| 221 |
+
|
| 222 |
+
It is noteworthy that many recent deep learning models for 3D molecules, such as Uni-Mol, Pocket2Mol, TargetDiff, and DecompDiff, which serve as baselines in our experiments, adhere to the concept of "equivariance" introduced by geometric deep learning (Atz et al., 2021; Isert et al., 2023a). However, the 3DMolFormer model does not explicitly enforce SE(3)-symmetry. It appears that, through the normalization of 3D coordinates and random rotations during data augmentation, 3DMolFormer has acquired SE(3)-equivariance by training on a sufficiently large and diverse dataset. This approach aligns with recent successful methods in the field, including AlphaFold3 (Abramson et al., 2024), which also does not rely on SE(3)-equivariant architectures.
|
| 223 |
+
|
| 224 |
+
Admittedly, our approach still has some limitations. First, 3DMolFormer does not account for the flexibility of proteins during ligand binding, which may affect the accuracy of subsequent binding affinity prediction. Second, protein-ligand binding is a dynamic process, but 3DMolFormer struggles to capture this dynamism effectively. Finally, 3DMolFormer does not consider environmental factors such as temperature and pH, which can significantly influence the 3D conformation of the binding complex. These issues represent core challenges in current computational methods for structure-based drug discovery, and we look forward to future work addressing these limitations. Furthermore, the implementation details in 3DMolFormer have the potential to be further optimized, for example, advanced methods of multi-objective reinforcement learning (Liu et al., 2014) may be introduced into the drug design process.
|
| 225 |
+
|
| 226 |
+
# REFERENCES
|
| 227 |
+
|
| 228 |
+
Josh Abramson, Jonas Adler, Jack Dunger, Richard Evans, Tim Green, Alexander Pritzel, Olaf Ronneberger, Lindsay Willmore, Andrew J Ballard, Joshua Bambrick, et al. Accurate structure prediction of biomolecular interactions with alphafold 3. Nature, pp. 1-3, 2024.
|
| 229 |
+
Sungsoo Ahn, Junsu Kim, Hankook Lee, and Jinwoo Shin. Guiding deep molecular optimization with genetic exploration. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 12008-12021. Curran Associates, Inc., 2020.
|
| 230 |
+
Eric Alcaide, Zhifeng Gao, Guolin Ke, Yaqi Li, Linfeng Zhang, Hang Zheng, and Gengmo Zhou. Uni-mol docking v2: Towards realistic and accurate binding pose prediction. arXiv preprint arXiv:2405.11769, 2024.
|
| 231 |
+
Amr Alhossary, Stephanus Daniel Handoko, Yuguang Mu, and Chee-Keong Kwoh. Fast, accurate, and reliable molecular docking with quickvina 2. Bioinformatics, 31(13):2214-2216, 2015.
|
| 232 |
+
Josep Arús-Pous, Simon Viet Johansson, Oleksii Prykhodko, Esben Jannik Bjerrum, Christian Tyrchan, Jean-Louis Reymond, Hongming Chen, and Ola Engkvist. Randomized smiles strings improve the quality of molecular generative models. Journal of cheminformatics, 11(1):1-13, 2019.
|
| 233 |
+
Kenneth Atz, Francesca Grisoni, and Gisbert Schneider. Geometric deep learning on molecular representations. Nature Machine Intelligence, 3(12):1023-1032, 2021.
|
| 234 |
+
Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. Science, 373(6557):871-876, 2021.
|
| 235 |
+
Viraj Bagal, Rishal Aggarwal, PK Vinod, and U Deva Priyakumar. Molgpt: molecular generation using a transformer-decoder model. Journal of Chemical Information and Modeling, 62(9):2064-2076, 2021.
|
| 236 |
+
Mostapha Benhenda. Can ai reproduce observed chemical diversity? bioRxiv, pp. 292177, 2018.
|
| 237 |
+
G Richard Bickerton, Gaia V Paolini, Jérémy Besnard, Sorel Muresan, and Andrew L Hopkins. Quantifying the chemical beauty of drugs. Nature chemistry, 4(2):90-98, 2012.
|
| 238 |
+
Alexandre Blanco-Gonzalez, Alfonso Cabezon, Alejandro Seco-Gonzalez, Daniel Conde-Torres, Paula Antelo-Riveiro, Angel Pineiro, and Rebecca Garcia-Fandino. The role of ai in drug discovery: challenges, opportunities, and strategies. *Pharmaceuticals*, 16(6):891, 2023.
|
| 239 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pp. 1877-1901, 2020.
|
| 240 |
+
Martin Buttenschoen, Garrett M Morris, and Charlotte M Deane. Posebusters: Ai-based docking methods fail to generate physically valid poses or generalise to novel sequences. Chemical Science, 15(9):3130-3139, 2024.
|
| 241 |
+
Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, and Dejing Dou. Structure-aware protein self-supervised learning. Bioinformatics, 2023.
|
| 242 |
+
Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola. Diffdock: Diffusion steps, twists, and turns for molecular docking. In International Conference on Learning Representations (ICLR), 2023.
|
| 243 |
+
|
| 244 |
+
Gabriele Corso, Arthur Deng, Nicholas Polizzi, Regina Barzilay, and Tommi Jaakkola. Deep confident steps to new pockets: Strategies for docking generalization. In International Conference on Learning Representations (ICLR), 2024.
|
| 245 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186, 2019.
|
| 246 |
+
Chai Discovery, Jacques Boitreaud, Jack Dent, Matthew McPartlon, Joshua Meier, Vinicius Reis, Alex Rogozhnikov, and Kevin Wu. Chai-1: Decoding the molecular interactions of life. bioRxiv, pp. 2024–10, 2024.
|
| 247 |
+
Yuanqi Du, Tianfan Fu, Jimeng Sun, and Shengchao Liu. Molgensurvey: A systematic survey in machine learning models for molecule design. arXiv preprint arXiv:2203.14500, 2022.
|
| 248 |
+
Yuanqi Du, Arian R Jamasb, Jeff Guo, Tianfan Fu, Charles Harris, Yingheng Wang, Chenru Duan, Pietro Lio, Philippe Schwaller, and Tom L Blundell. Machine learning-aided generative molecular design. Nature Machine Intelligence, pp. 1-16, 2024.
|
| 249 |
+
Jerome Eberhardt, Diogo Santos-Martins, Andreas F Tillack, and Stefano Forli. Autodock vina 1.2.0: New docking methods, expanded force field, and python bindings. Journal of chemical information and modeling, 61(8):3891-3898, 2021.
|
| 250 |
+
Peter Eckmann, Kunyang Sun, Bo Zhao, Mudong Feng, Michael K Gilson, and Rose Yu. Limo: Latent inceptionism for targeted molecule generation. In International Conference on Machine Learning. PMLR, 2022.
|
| 251 |
+
Peter Ertl and Ansgar Schuffenhauer. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of cheminformatics, 1(1):1-11, 2009.
|
| 252 |
+
Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. Nature Machine Intelligence, 4(2):127-134, 2022.
|
| 253 |
+
Shikun Feng, Minghao Li, Yinjun Jia, Weiying Ma, and Yanyan Lan. Protein-ligand binding representation learning from fine-grained interactions. In International Conference on Learning Representations, 2024a.
|
| 254 |
+
Wei Feng, Lvwei Wang, Zaiyun Lin, Yanhao Zhu, Han Wang, Jianqiang Dong, Rong Bai, Huting Wang, Jielong Zhou, Wei Peng, et al. Generation of 3d molecules in pockets via a language model. Nature Machine Intelligence, 6(1):62-73, 2024b.
|
| 255 |
+
Daniel Flam-Shepherd and Alán Aspuru-Guzik. Language models can generate molecules, materials, and protein binding sites directly in three dimensions as xyz, cif, and pdb files. arXiv preprint arXiv:2305.05708, 2023.
|
| 256 |
+
Paul G Francoeur, Tomohide Masuda, Jocelyn Sunseri, Andrew Jia, Richard B Iovanisci, Ian Snyder, and David R Koes. Three-dimensional convolutional neural networks and a cross-docked data set for structure-based drug design. Journal of chemical information and modeling, 60(9):4200-4215, 2020.
|
| 257 |
+
Tianfan Fu, Wenhao Gao, Connor Coley, and Jimeng Sun. Reinforced genetic algorithm for structure-based drug design. Advances in Neural Information Processing Systems, 35:12325-12338, 2022.
|
| 258 |
+
Bowen Gao, Bo Qiang, Haichuan Tan, Yinjun Jia, Minsi Ren, Minsi Lu, Jingjing Liu, Wei-Ying Ma, and Yanyan Lan. Drugclip: Contrastive protein-molecule representation learning for virtual screening. Advances in Neural Information Processing Systems, 36, 2024a.
|
| 259 |
+
Bowen Gao, Minsi Ren, Yuyan Ni, Yanwen Huang, Bo Qiang, Zhi-Ming Ma, Wei-Ying Ma, and Yanyan Lan. Rethinking specificity in sbdd: Leveraging delta score and energy-guided diffusion. arXiv preprint arXiv:2403.12987, 2024b.
|
| 260 |
+
|
| 261 |
+
Siavash Golkar, Mariel Pettee, Michael Eickenberg, Alberto Bietti, Miles Cranmer, Geraud Krawezik, Francois Lanusse, Michael McCabe, Ruben Ohana, Liam Parker, et al. xval: A continuous number encoding for large language models. arXiv preprint arXiv:2310.02989, 2023.
|
| 262 |
+
Jiaqi Guan, Wesley Wei Qian, Xingang Peng, Yufeng Su, Jian Peng, and Jianzhu Ma. 3d equivariant diffusion for target-aware molecule generation and affinity prediction. In International Conference on Learning Representations, 2023a.
|
| 263 |
+
Jiaqi Guan, Xiangxin Zhou, Yuwei Yang, Yu Bao, Jian Peng, Jianzhu Ma, Qiang Liu, Liang Wang, and Quanquan Gu. Decompdiff: diffusion models with decomposed priors for structure-based drug design. In International Conference on Machine Learning, 2023b.
|
| 264 |
+
Charles Harris, Kieran Didi, Arian R Jamasb, Chaitanya K Joshi, Simon V Mathis, Pietro Lio, and Tom Blundell. Benchmarking generated poses: How rational is structure-based drug design with generative models? arXiv preprint arXiv:2308.07413, 2023.
|
| 265 |
+
Xiuyuan Hu, Guoqing Liu, Yang Zhao, and Hao Zhang. De novo drug design using reinforcement learning with multiple gpt agents. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
|
| 266 |
+
Xiuyuan Hu, Guoqing Liu, Quanming Yao, Yang Zhao, and Hao Zhang. Hamiltonian diversity: effectively measuring molecular diversity by shortest hamiltonian circuits. Journal of Cheminformatics, 16(1):94, 2024.
|
| 267 |
+
Ross Irwin, Spyridon Dimitriadis, Jiazhen He, and Esben Jannik Bjerrum. Chemformer: a pretrained transformer for computational chemistry. Machine Learning: Science and Technology, 3 (1):015022, 2022.
|
| 268 |
+
Clemens Isert, Kenneth Atz, and Gisbert Schneider. Structure-based drug design with geometric deep learning. Current Opinion in Structural Biology, 79:102548, 2023a.
|
| 269 |
+
Clemens Isert, Kenneth Atz, and Gisbert Schneider. Structure-based drug design with geometric deep learning. Current Opinion in Structural Biology, 79:102548, 2023b.
|
| 270 |
+
Dejun Jiang, Chang-Yu Hsieh, Zhenxing Wu, Yu Kang, Jike Wang, Ercheng Wang, Ben Liao, Chao Shen, Lei Xu, Jian Wu, et al. Interactiongraphnet: A novel and efficient deep graph representation learning framework for accurate protein-ligand interaction predictions. Journal of medicinal chemistry, 64(24):18209-18232, 2021.
|
| 271 |
+
Wengong Jin, Regina Barzilay, and T. Jaakkola. Multi-objective molecule generation using interpretable substructures. In International Conference on Machine Learning, pp. 4849-4859. PMLR, 2020.
|
| 272 |
+
Zygimentas Jocys, Joanna Grundy, and Katayoun Farrahi. Drugpose: benchmarking 3d generative methods for early stage drug discovery. Digital Discovery, 2024.
|
| 273 |
+
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. nature, 596(7873):583-589, 2021.
|
| 274 |
+
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
|
| 275 |
+
David Ryan Koes, Matthew P Baumgartner, and Carlos J Camacho. Lessons learned in empirical scoring with smina from the csar 2011 benchmarking exercise. Journal of chemical information and modeling, 53(8):1893-1904, 2013.
|
| 276 |
+
Seul Lee, Seanie Lee, Kenji Kawaguchi, and Sung Ju Hwang. Drug discovery with dynamic goal-aware fragments. Proceedings of the 41st International Conference on Machine Learning, 2024.
|
| 277 |
+
Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin, Robert Verkuil, Ori Kabeli, Yaniv Shmueli, et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science, 379(6637):1123-1130, 2023.
|
| 278 |
+
|
| 279 |
+
Chunming Liu, Xin Xu, and Dewen Hu. Multi-objective reinforcement learning: A comprehensive overview. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(3):385-398, 2014.
|
| 280 |
+
Meng Liu, Youzhi Luo, Kanji Uchino, Koji Maruhashi, and Shuiwang Ji. Generating 3d molecules for target protein binding. In International Conference on Machine Learning, 2022.
|
| 281 |
+
Zhihai Liu, Minyi Su, Li Han, Jie Liu, Qifan Yang, Yan Li, and Renxiao Wang. Forging the basis for developing protein-ligand interaction scoring functions. Accounts of chemical research, 50 (2):302-309, 2017.
|
| 282 |
+
Siyu Long, Yi Zhou, Xinyu Dai, and Hao Zhou. Zero-shot 3d drug design by sketching and generating. Advances in Neural Information Processing Systems, 35:23894-23907, 2022.
|
| 283 |
+
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
|
| 284 |
+
Wei Lu, Qifeng Wu, Jixian Zhang, Jiahua Rao, Chengtao Li, and Shuangjia Zheng. Tankbind: Trigonometry-aware neural networks for drug-protein binding structure prediction. Advances in neural information processing systems, 35:7236-7249, 2022.
|
| 285 |
+
Shitong Luo, Jiaqi Guan, Jianzhu Ma, and Jian Peng. A 3d generative model for structure-based drug design. Advances in Neural Information Processing Systems, 34:6229-6239, 2021.
|
| 286 |
+
Kit-Kay Mak, Yi-Hang Wong, and Mallikarjuna Rao Pichika. Artificial intelligence in drug discovery and development. *Drug Discovery and Evaluation: Safety and Pharmacokinetic Assays*, pp. 1-38, 2023.
|
| 287 |
+
Oscar Méndez-Lucio, Mazen Ahmad, Ehecatl Antonio del Rio-Chanona, and Jörg Kurt Wegner. A geometric deep learning approach to predict binding conformations of bioactive molecules. Nature Machine Intelligence, 3(12):1033-1039, 2021.
|
| 288 |
+
Alex Morehead, Nabin Giri, Jian Liu, and Jianlin Cheng. Deep learning for protein-ligand docking: Are we there yet? arXiv preprint arXiv:2405.14108, 2024.
|
| 289 |
+
Garrett M Morris, Ruth Huey, William Lindstrom, Michel F Sanner, Richard K Belew, David S Goodsell, and Arthur J Olson. Autodock4 and autodocktools4: Automated docking with selective receptor flexibility. Journal of computational chemistry, 30(16):2785-2791, 2009.
|
| 290 |
+
Marcus Olivecrona, Thomas Blaschke, Ola Engkvist, and Hongming Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):1-14, 2017.
|
| 291 |
+
Daniele Pala and David E Clark. Caught between a rock and a hard place: current challenges in structure-based drug design. *Drug Discovery Today*, pp. 104106, 2024.
|
| 292 |
+
Qizhi Pei, Kaiyuan Gao, Lijun Wu, Jinhua Zhu, Yingce Xia, Shufang Xie, Tao Qin, Kun He, TieYan Liu, and Rui Yan. Fabind: Fast and accurate protein-ligand binding. Advances in Neural Information Processing Systems, 36, 2024.
|
| 293 |
+
Xingang Peng, Shitong Luo, Jiaqi Guan, Qi Xie, Jian Peng, and Jianzhu Ma. Pocket2mol: Efficient molecular sampling based on 3d protein pockets. In International Conference on Machine Learning, 2022.
|
| 294 |
+
Joanne Quinn, Joanne McEachen, Michael Fullan, Mag Gardner, and Max Drummy. *Dive into deep learning: Tools for engagement*. Corwin Press, 2019.
|
| 295 |
+
Rodrigo Quiroga and Marcos A Villarreal. Vinardo: A scoring function based on autodock vina improves scoring, docking, and virtual screening. PloS one, 11(5):e0155183, 2016.
|
| 296 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
|
| 297 |
+
Matthew Ragoza, Tomohide Masuda, and David Ryan Koes. Generating 3d molecules conditional on receptor binding sites with deep generative models. Chemical science, 13(9):2701-2713, 2022.
|
| 298 |
+
|
| 299 |
+
Arne Schneuing, Yuanqi Du, Charles Harris, Arian Jamasb, Ilia Igashov, Weitao Du, Tom Blundell, Pietro Lio, Carla Gomes, Max Welling, et al. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695, 2022.
|
| 300 |
+
Philippe Schwaller, Teodoro Laino, Théophile Gaudin, Peter Bolgar, Christopher A Hunter, Costas Bekas, and Alpha A Lee. Molecular transformer: a model for uncertainty-calibrated chemical reaction prediction. ACS central science, 5(9):1572-1583, 2019.
|
| 301 |
+
Marwin HS Segler, Thierry Kogej, Christian Tyrchan, and Mark P Waller. Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS central science, 4(1): 120-131, 2018.
|
| 302 |
+
Gregor Simm, Robert Pinsler, and José Miguel Hernández-Lobato. Reinforcement learning for molecular design guided by quantum mechanics. In International Conference on Machine Learning, pp. 8959-8969. PMLR, 2020.
|
| 303 |
+
Jacob O Spiegel and Jacob D Durrant. Autogrow4: an open-source genetic algorithm for de novo drug design and lead optimization. Journal of cheminformatics, 12:1-16, 2020.
|
| 304 |
+
Hannes Stärk, Octavian Ganea, Lagnajit Pattanaik, Regina Barzilay, and Tommi Jaakkola. Equibind: Geometric deep learning for drug binding structure prediction. In International conference on machine learning, pp. 20503-20521. PMLR, 2022.
|
| 305 |
+
Minyi Su, Qifan Yang, Yu Du, Guoqin Feng, Zhihai Liu, Yan Li, and Renxiao Wang. Comparative assessment of scoring functions: the casf-2016 update. Journal of chemical information and modeling, 59(2):895–913, 2018.
|
| 306 |
+
Hampus Gummesson Svensson, Christian Tyrchan, Ola Engkvist, and Morteza Haghir Chehreghani. Utilizing reinforcement learning for de novo drug design. arXiv preprint arXiv:2303.17615, 2023.
|
| 307 |
+
Oleg Trott and Arthur J Olson. Autodock vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. Journal of computational chemistry, 31(2):455-461, 2010.
|
| 308 |
+
Rob LM Van Montfort and Paul Workman. Structure-based drug design: aiming for a perfect fit. Essays in biochemistry, 61(5):431-437, 2017.
|
| 309 |
+
Debby D Wang, Wenhui Wu, and Ran Wang. Structure-based, deep-learning models for protein-ligand binding affinity prediction. Journal of Cheminformatics, 16(1):2, 2024a.
|
| 310 |
+
Jike Wang, Rui Qin, Mingyang Wang, Meijing Fang, Yangyang Zhang, Yuchen Zhu, Qun Su, Qiaolin Gou, Chao Shen, Odin Zhang, et al. Token-mol 1.0: Tokenized drug design with large language model. arXiv preprint arXiv:2407.07930, 2024b.
|
| 311 |
+
Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, and Dejing Dou. Property-aware relation networks for few-shot molecular property prediction. Advances in Neural Information Processing Systems, 34:17441-17454, 2021.
|
| 312 |
+
Oliver Wieder, Stefan Kohlbacher, Méline Kuenemann, Arthur Garon, Pierre Ducrot, Thomas Seidel, and Thierry Langer. A compact review of molecular property prediction with graph neural networks. *Drug Discovery Today: Technologies*, 37:1-12, 2020.
|
| 313 |
+
Jun Xia, Yanqiao Zhu, Yuanqi Du, Yue Liu, and Stan Z Li. A systematic survey of chemical pretrained models. International Joint Conference on Artificial Intelligence, 2023.
|
| 314 |
+
Chao Yang, Eric Anthony Chen, and Yingkai Zhang. Protein-ligand docking in the machine-learning era. *Molecules*, 27(14):4568, 2022.
|
| 315 |
+
Shuwen Yang, Ziyao Li, Guojie Song, and Lingsheng Cai. Deep molecular representation learning via fusing physical and chemical information. Advances in Neural Information Processing Systems, 34:16346-16357, 2021a.
|
| 316 |
+
Soojung Yang, Doyeong Hwang, Seul Lee, Seongok Ryu, and Sung Ju Hwang. Hit and lead discovery with explorative rl and fragment-based molecule generation. Advances in Neural Information Processing Systems, 34:7924-7936, 2021b.
|
| 317 |
+
|
| 318 |
+
Jiaxuan You, Bowen Liu, Zhitao Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. Advances in neural information processing systems, 31, 2018.
|
| 319 |
+
Yangtian Zhang, Huiyu Cai, Chence Shi, Bozitao Zhong, and Jian Tang. E3bind: An end-to-end equivariant network for protein-ligand docking. In International Conference on Learning Representations (ICLR), 2023a.
|
| 320 |
+
Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee. Motif-based graph self-supervised learning for molecular property prediction. Advances in Neural Information Processing Systems, 34:15870-15882, 2021.
|
| 321 |
+
Zaixi Zhang, Jiaxian Yan, Qi Liu, Enhong Chen, and Marinka Zitnik. A systematic survey in geometric deep learning for structure-based drug design. arXiv preprint arXiv:2306.11768, 2023b.
|
| 322 |
+
Lingling Zhao, Yan Zhu, Junjie Wang, Naifeng Wen, Chunyu Wang, and Liang Cheng. A brief review of protein-ligand interaction prediction. Computational and Structural Biotechnology Journal, 20:2831-2838, 2022.
|
| 323 |
+
Kangyu Zheng, Yingzhou Lu, Zaixi Zhang, Zhongwei Wan, Yao Ma, Marinka Zitnik, and Tianfan Fu. Structure-based drug design benchmark: Do 3d methods really dominate? arXiv preprint arXiv:2406.03403, 2024.
|
| 324 |
+
Artem Zholus, Maksim Kuznetsov, Roman Schutski, Rim Shayakhmetov, Daniil Polykovskiy, Sarath Chandar, and Alex Zhavoronkov. Bindgpt: A scalable framework for 3d molecular design via language modeling and reinforcement learning. arXiv preprint arXiv:2406.03686, 2024.
|
| 325 |
+
Gengmo Zhou, Zhifeng Gao, Qiankun Ding, Hang Zheng, Hongteng Xu, Zhewei Wei, Linfeng Zhang, and Guolin Ke. Uni-mol: A universal 3d molecular representation learning framework. In International Conference on Learning Representations, 2023a.
|
| 326 |
+
Gengmo Zhou, Zhifeng Gao, Zhewei Wei, Hang Zheng, and Guolin Ke. Do deep learning methods really perform better in molecular conformation generation? arXiv preprint arXiv:2302.07061, 2023b.
|
| 327 |
+
Jinhua Zhu, Yingce Xia, Lijun Wu, Shufang Xie, Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu. Unified 2d and 3d pre-training of molecular representations. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining, pp. 2626-2636, 2022.
|
| 328 |
+
|
| 329 |
+
# A PARALLEL SEQUENCES
|
| 330 |
+
|
| 331 |
+
Here is a real example of the parallel sequence of a pocket-ligand complex with a total length of 867, corresponding to Figures 1 and 2. The token sequence mainly consists of four parts: pocket atoms, pocket atom coordinates, ligand SMILES, and ligand atom coordinates, with the start and end of each part marked by special tokens. The first amino acid of the pocket is specially marked. Moreover, the '[x]', '[y]', and '[z]' tokens correspond to the values representing 3D coordinates in the numerical sequence.
|
| 332 |
+
|
| 333 |
+
[Token sequence example abridged due to extraction damage. The original table lists the full token sequence: the pocket-start special token '[PS]', followed by pocket atom tokens ('N', 'CA', 'C', 'O', ...), the coordinate placeholder tokens '[x]', '[y]', '[z]' repeated for each pocket atom, the special tokens '[PCE]' and '[LS]', the ligand SMILES tokens (e.g. 'C', '(', '=', 'O', ')', 'N', 'c', '1', 'S', ...), and finally '[LE]', '[LCS]', and the ligand coordinate placeholder tokens.]
|
| 334 |
+
|
| 335 |
+
|
| 336 |
+
|
| 337 |
+
[Numerical sequence example abridged due to extraction damage. Positions corresponding to non-coordinate tokens hold a constant placeholder value (shown as 1 in the example), while positions corresponding to the '[x]', '[y]', '[z]' tokens hold the normalized 3D coordinate values of the pocket atoms and ligand atoms.]
|
| 338 |
+
|
| 339 |
+
In addition, the factor $q$ in Eq. (1) is set to 5.0. The performance of the model is not sensitive to the choice of $q$ , because most of the 3D coordinates in our data (protein pockets and ligands) are in a limited range.
|
| 340 |
+
|
| 341 |
+
# B PRE-TRAINING
|
| 342 |
+
|
| 343 |
+
Data Sources Protein pockets for pre-training (3.2M), ligand conformations for pre-training (209M), and ground-truth protein-ligand complexes for docking fine-tuning (17K): https://github.com/deepmodeling/Uni-Mol/tree/main/unimol.
|
| 344 |
+
|
| 345 |
+
Docked protein-ligand complexes for pre-training and test set for pocket-aware 3D drug design: https://github.com/guanjq/targetdiff.
|
| 346 |
+
|
| 347 |
+
In addition, samples whose coordinate range in any single dimension exceeds 40 are removed, in order to filter out outliers, which account for less than $0.1\%$ of the data.
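A simple sketch of this filtering rule, assuming each sample's atom coordinates are held in an (N, 3) array; the function name is illustrative:

```python
import numpy as np

def keep_sample(coords, max_extent=40.0):
    """Keep a pocket-ligand sample only if its coordinate extent in every
    dimension (max - min over x, y, z) stays within max_extent."""
    coords = np.asarray(coords, dtype=float)          # shape (num_atoms, 3)
    extent = coords.max(axis=0) - coords.min(axis=0)  # per-dimension range
    return bool(np.all(extent <= max_extent))
```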
|
| 348 |
+
|
| 349 |
+
Model Scaling The standard dual-channel model used in our paper follows the configuration of the GPT-2 small model (Radford et al., 2019). An ablation study of pre-training is conducted to determine the appropriate scale for 3DMolFormer, where the pre-training loss on the ligand validation set is reported for each model size:
|
| 350 |
+
|
| 351 |
+
<table><tr><td>Layers</td><td>Heads</td><td>Embedding length</td><td>Pre-training Loss</td></tr><tr><td>8</td><td>8</td><td>256</td><td>0.325</td></tr><tr><td>12</td><td>8</td><td>256</td><td>0.254</td></tr><tr><td>12</td><td>12</td><td>256</td><td>0.229</td></tr><tr><td>12</td><td>12</td><td>768</td><td>0.178</td></tr><tr><td>16</td><td>12</td><td>768</td><td>0.178</td></tr><tr><td>16</td><td>16</td><td>768</td><td>0.180</td></tr></table>
|
| 352 |
+
|
| 353 |
+
The standard model size achieves the lowest validation loss, and the larger configurations do not improve upon it, so it is adopted in our design.
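For reference, the selected configuration corresponds to a GPT-2-small-scale decoder. A hedged sketch of such a configuration is shown below; the field names and the vocabulary size are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class DualChannelConfig:
    # GPT-2 small scale, as selected by the scaling ablation above.
    n_layers: int = 12
    n_heads: int = 12
    d_embed: int = 768
    # Dual-channel specifics (illustrative): one head predicts tokens, the other
    # regresses the numerical values aligned with the '[x]'/'[y]'/'[z]' tokens.
    token_vocab_size: int = 512        # assumed placeholder, not reported here
    numeric_loss_weight: float = 1.0   # the alpha coefficient in Eq. (2)
```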
|
| 354 |
+
|
| 355 |
+
Hyper-parameters The coefficient $\alpha$ in the loss function of Eq. (2) is set to 1.0. In an ablation study, we observe that the selection of $\alpha$ does not significantly affect the balance between CE loss and MSE loss:
|
| 356 |
+
|
| 357 |
+
<table><tr><td>α</td><td>CE Loss</td><td>MSE Loss</td></tr><tr><td>0.1</td><td>0.164</td><td>0.014</td></tr><tr><td>1.0</td><td>0.164</td><td>0.014</td></tr><tr><td>10.0</td><td>0.164</td><td>0.015</td></tr></table>
|
| 358 |
+
|
| 359 |
+
This may be because the errors on the token sequences and those on the numerical sequences each converge on their own during the large-scale pre-training.
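A minimal sketch of how the two terms might be combined, assuming Eq. (2) is a weighted sum of the token cross-entropy and the coordinate mean-squared error (the exact form is defined earlier in the paper; the names here are hypothetical):

```python
import torch
import torch.nn.functional as F

def pretraining_loss(token_logits, token_targets, coord_preds, coord_targets,
                     coord_mask, alpha=1.0):
    """Weighted sum of token CE loss and numerical MSE loss (cf. Eq. (2)).

    coord_mask selects the positions holding '[x]'/'[y]'/'[z]' values, so the
    MSE is computed only where the numerical channel carries coordinates.
    """
    ce = F.cross_entropy(token_logits.view(-1, token_logits.size(-1)),
                         token_targets.view(-1))
    mse = F.mse_loss(coord_preds[coord_mask], coord_targets[coord_mask])
    return ce + alpha * mse
```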
|
| 360 |
+
|
| 361 |
+
The selection of other hyper-parameters follows the common practice of the pre-training of large language models (Radford et al., 2019; Quinn et al., 2019).
|
| 362 |
+
|
| 363 |
+
# C PROTEIN-LIGAND BINDING POSE PREDICTION SUPPLEMENT
|
| 364 |
+
|
| 365 |
+
Docking Setup The exhaustiveness of all 4 search-based docking baselines in Table 1 is set to 8, following the settings in Uni-Mol (Zhou et al., 2023a).
|
| 366 |
+
|
| 367 |
+
Standard Errors As shown in Table 3, the standard errors of the 3DMolFormer performance results are obtained by 5 individual runs of supervised fine-tuning on protein-ligand docking. The minor standard errors further validate the robustness and soundness of 3DMolFormer.
|
| 368 |
+
|
| 369 |
+
Table 3: Standard Errors of 3DMolFormer performance results in Table 1.
|
| 370 |
+
|
| 371 |
+
<table><tr><td>Methods</td><td>%<1.0Å (↑)</td><td>%<2.0Å (↑)</td><td>%<3.0Å (↑)</td><td>%<5.0Å (↑)</td><td>Avg. (↓)</td></tr><tr><td>3DMolFormer</td><td>43.8±0.4</td><td>84.9±0.5</td><td>96.4±0.2</td><td>98.8±0.0</td><td>1.29±0.02</td></tr></table>
|
| 372 |
+
|
| 373 |
+
Additional Experiments on PoseBusters PoseBusters (Buttenschoen et al., 2024) is a widely used benchmark for evaluating protein-ligand docking methods, particularly focusing on the challenges of blind docking, where the binding pocket information is not provided. However, in our study, we evaluate 3DMolFormer on PoseBusters using pocket information, providing a different evaluation context compared to the typical PoseBusters assessments conducted for state-of-the-art docking approaches such as AlphaFold3 (Abramson et al., 2024), Chai-1 (Discovery et al., 2024), and Uni-Mol Docking V2 (Alcaide et al., 2024).
|
| 374 |
+
|
| 375 |
+
For experiments on PoseBusters, the blind docking baselines following the standard evaluation setup include AutoDock Vina (Trott & Olson, 2010), DiffDock (Corso et al., 2023), Uni-Mol Docking V2 (Alcaide et al., 2024), AlphaFold3 (Abramson et al., 2024), and Chai-1 (Discovery et al., 2024). For pocket-aware docking approaches including Uni-Mol (Zhou et al., 2023a) and our 3DMolFormer, we provide pocket information for docking. As shown in Table 4, our 3DMolFormer achieves a higher pocket-aware docking accuracy than Uni-Mol, which is also higher than the blind docking accuracy of all state-of-the-art baselines.
|
| 376 |
+
|
| 377 |
+
Table 4: Experimental results of protein-ligand binding pose prediction on PoseBusters benchmark.
|
| 378 |
+
|
| 379 |
+
<table><tr><td>Methods</td><td>%<2.0Å (↑)</td></tr><tr><td>AutoDock Vina</td><td>52.3</td></tr><tr><td>DiffDock</td><td>37.9</td></tr><tr><td>Uni-Mol Docking V2</td><td>77.6</td></tr><tr><td>AlphaFold3</td><td>76.3</td></tr><tr><td>Chai-1</td><td>77.1</td></tr><tr><td>Uni-Mol (pocket-aware)</td><td>74.8</td></tr><tr><td>3DMolFormer (pocket-aware)</td><td>81.5</td></tr></table>
|
| 380 |
+
|
| 381 |
+
# D POCKET-AWARE 3D DRUG DESIGN SUPPLEMENT
|
| 382 |
+
|
| 383 |
+
Clarification on the SA score It should be clarified that the SA score ranges in [1, 10] as defined in the original paper (Ertl & Schuffenhauer, 2009), where a lower score is better. Following the previous work on pocket-aware 3D drug design (Guan et al., 2023b), we report the linearly transformed SA score: $\mathrm{SA} = (10 - \mathrm{SA}_{\mathrm{origin}}) / 9 \in [0,1]$ , where a higher score is better.
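The transformation is a one-line rescaling; a small sketch for clarity (the function name is illustrative):

```python
def normalized_sa(sa_origin):
    """Map the original SA score in [1, 10] (lower is better) to [0, 1] (higher is better)."""
    return (10.0 - sa_origin) / 9.0
```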
|
| 384 |
+
|
| 385 |
+
Clarification on molecular diversity We do not include metrics for molecular diversity such as internal diversity (Benhenda, 2018) and Hamiltonian diversity (Hu et al., 2024) in our evaluation, because existing metrics are all based on 2D graph structures, while pocket-aware 3D drug design is a 3D molecular generation task.
|
| 386 |
+
|
| 387 |
+
Generation Setup In the 3D drug design experiments, each baseline method generates no more than 100 molecules for each protein pocket. For 3DMolFormer, exactly 100 unique molecules are generated and selected for each protein pocket, which is a more stringent requirement.
|
| 388 |
+
|
| 389 |
+
Case Study Visualization of the reference binding molecule and two molecules generated by 3DMolFormer on protein 4H3C:
|
| 390 |
+
|
| 391 |
+
Standard Errors and Ablation Study As shown in Table 5, the standard errors of the 3DMolFormer performance results are obtained from 5 individual runs of RL fine-tuning on pocket-aware 3D drug design. The small standard errors further validate the robustness and soundness of 3DMolFormer.
|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
Reference
|
| 395 |
+
|
| 396 |
+

|
| 397 |
+
Designed Example 1
|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
Designed Example 2
|
| 401 |
+
|
| 402 |
+
<table><tr><td>Molecule</td><td>Vina Dock</td><td>QED</td><td>SA</td></tr><tr><td>Reference</td><td>-8.0</td><td>0.55</td><td>0.91</td></tr><tr><td>Designed Example 1</td><td>-11.1</td><td>0.35</td><td>0.91</td></tr><tr><td>Designed Example 2</td><td>-10.6</td><td>0.48</td><td>0.75</td></tr></table>
|
| 403 |
+
|
| 404 |
+
In addition, we conduct an ablation study on 3DMolFormer. The variant 3DMolFormer w/o RL freezes the GPT weights instead of performing RL fine-tuning, that is, it generates molecules without any pocket-aware fine-tuning. The results indicate that the RL fine-tuning process is essential for this task.
|
| 405 |
+
|
| 406 |
+
Table 5: Standard Errors of 3DMolFormer performance results in Table 2, and results of the ablation study.
|
| 407 |
+
|
| 408 |
+
<table><tr><td>Methods</td><td>Vina Score (↓)</td><td>Vina Dock (↓)</td><td>QED (↑)</td><td>SA (↑)</td><td>Success Rate (↑)</td></tr><tr><td>3DMolFormer</td><td>-6.02±0.27</td><td>-9.48±0.18</td><td>0.49±0.01</td><td>0.78±0.01</td><td>85.3%±1.5%</td></tr><tr><td>3DMolFormer w/o RL</td><td>-4.20</td><td>-5.03</td><td>0.46</td><td>0.50</td><td>2.1%</td></tr></table>
|
| 409 |
+
|
| 410 |
+
Distribution of Generated Molecules Figure 4 demonstrates the distributions of molecular weights, logP values, and the number of rotatable bonds of the 10,000 molecules designed by 3DMolFormer for all the 100 targets reported in Table 2. It is worth mentioning that all three metrics are taken into account in drug-likeness, as measured by QED.
|
| 411 |
+
|
| 412 |
+

|
| 413 |
+
Figure 4: The distributions of molecular weights, logP values, and the number of rotatable bonds of the molecules designed by 3DMolFormer.
|
| 414 |
+
|
| 415 |
+

|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
|
| 419 |
+
Additional Evaluation by Delta Score Delta Score is a recent evaluation metric in structure-based drug design that emphasizes the specificity of molecular binding (Gao et al., 2024b).
|
| 420 |
+
|
| 421 |
+
Unlike traditional docking scores, which can inflate results due to biases, Delta Score evaluates the selective affinity of a molecule for its target compared to other potential binding pockets, providing a more accurate measure of binding specificity and reducing the influence of promiscuous binding effects.
|
| 422 |
+
|
| 423 |
+
Results in Table 6 show that 3DMolFormer outperforms previous methods in terms of Delta Score, demonstrating its superior capability to generate molecules with higher specificity for their intended targets.
|
| 424 |
+
|
| 425 |
+
Table 6: Experimental results of Delta Score on pocket-aware 3D drug design.
|
| 426 |
+
|
| 427 |
+
<table><tr><td>Methods</td><td>Mean Delta Score (↑)</td></tr><tr><td>Reference</td><td>1.158</td></tr><tr><td>AR</td><td>0.393</td></tr><tr><td>Pocket2Mol</td><td>0.437</td></tr><tr><td>TargetDiff</td><td>0.335</td></tr><tr><td>DecompDiff</td><td>0.354</td></tr><tr><td>3DMolFormer</td><td>0.716</td></tr></table>
|
| 428 |
+
|
| 429 |
+
Additional Evaluation by PoseCheck
|
| 430 |
+
- Clash Score and Strain Energy are key metrics used in PoseCheck (Harris et al., 2023) to evaluate the physical plausibility and stability of protein-ligand poses in structure-based drug design.
|
| 431 |
+
- Clash Score assesses steric clashes between atoms in the generated pose, while Strain Energy quantifies the energetic distortion from ideal molecular conformations. Both metrics ensure that generated poses align with physical and chemical principles.
|
| 432 |
+
|
| 433 |
+
Table 7 demonstrates that 3DMolFormer outperforms baselines on both metrics, highlighting its ability to produce more physically realistic and energetically favorable docking poses.
|
| 434 |
+
|
| 435 |
+
Table 7: Experimental results of Clash Score and Strain Energy (PoseCheck) on pocket-aware 3D drug design.
|
| 436 |
+
|
| 437 |
+
<table><tr><td>Methods</td><td>Mean Clash Score (↓)</td><td>Median Strain Energy (↓)</td></tr><tr><td>Reference</td><td>4.59</td><td>102.5</td></tr><tr><td>LiGAN</td><td>3.40</td><td>18693.8</td></tr><tr><td>Pocket2Mol</td><td>5.62</td><td>194.9</td></tr><tr><td>TargetDiff</td><td>9.08</td><td>1241.7</td></tr><tr><td>3DMolFormer</td><td>3.25</td><td>183.3</td></tr></table>
|
| 438 |
+
|
| 439 |
+
Furthermore, while DrugPose (Jocys et al., 2024) offers a broad range of metrics for 3D drug discovery, its overlap with PoseCheck in the context of structure-based drug design makes PoseCheck a sufficient benchmark for our evaluation, ensuring a comprehensive assessment without redundancy.
|
3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a880ca789b988e4f30745229ffc4a469f78b32e17610cb50628c6c67d03dd5a0
|
| 3 |
+
size 1515179
|
3dmolformeradualchannelframeworkforstructurebaseddrugdiscovery/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1889f6fb503aa295e1a1be1eea9d0c76e1eeed3daa05186c08cc849483d8d99b
|
| 3 |
+
size 549755
|
3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/7fe24d6b-0196-4bae-b5c6-92593a9ce526_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c241807beffa4bd8dc3f111f88f0b1eb3f1b66156cddea781a6a69d78df93b68
|
| 3 |
+
size 151690
|
3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/7fe24d6b-0196-4bae-b5c6-92593a9ce526_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c935f6ee34aa40d19613a92f36fadbdda6b3bae13a288645a20d9f8b5f080238
|
| 3 |
+
size 181640
|
3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/7fe24d6b-0196-4bae-b5c6-92593a9ce526_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bc7cc2094c1aaec724a239d2cb0ae1834e734175b525be22e0fea9dc9e9276b6
|
| 3 |
+
size 2647759
|
3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d01c7ce61aad14456cc4c9c89150c43657ff60c702283aa1eb2c42419ed34a38
|
| 3 |
+
size 1299392
|
3dmolt5leveragingdiscretestructuralinformationformoleculetextmodeling/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:466ce7937aa5cfbb744eb35237c0b9085f55bc4a767c3d74824d8a55650494b9
|
| 3 |
+
size 689654
|
3dpropertiesidentifyingchallengesindpoandchartingapathforward/8fedd307-9532-4e32-ad92-bd929756427e_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cd2826723ce96ae40a2fc3d6cb27051b05c2c7707dba07caaa75a7845fc34fb0
|
| 3 |
+
size 150815
|
3dpropertiesidentifyingchallengesindpoandchartingapathforward/8fedd307-9532-4e32-ad92-bd929756427e_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b8c3e38a2f55b037545d1a7a7cbea72184a5c12372f20ed7a3a64559c7f73a8b
|
| 3 |
+
size 175826
|
3dpropertiesidentifyingchallengesindpoandchartingapathforward/8fedd307-9532-4e32-ad92-bd929756427e_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:632e5dd80438145b17c4628b1823b49fb859f8b1bc7f8779922d88e731849908
|
| 3 |
+
size 2070930
|
3dpropertiesidentifyingchallengesindpoandchartingapathforward/full.md
ADDED
|
@@ -0,0 +1,690 @@
|

|

|

|

|

|
# 3D-PROPERTIES: IDENTIFYING CHALLENGES IN DPO AND CHARTING A PATH FORWARD

Yuzi Yan $^{1,3\dagger}$, Yibo Miao $^{2,3\dagger}$, Jialian Li $^{3}$, Yipin Zhang $^{3}$, Jian Xie $^{3}$, Zhijie Deng $^{2*}$, Dong Yan $^{3*}$

<sup>1</sup>Department of Electronic Engineering, Tsinghua University

$^{2}$ Shanghai Jiao Tong University $^{3}$ Baichuan AI

yan-yz17@tsinghua.org.cn, {miaoyibo, zhijied}@sjtu.edu.cn, lijialian7@163.com, zypzyp665@gmail.com, xiejian1990@gmail.com, sproblvem@gmail.com
# ABSTRACT

Aligning large language models (LLMs) with human preferences has gained significant attention, with Proximal Policy Optimization (PPO) as a standard yet computationally expensive method and Direct Preference Optimization (DPO) as a more efficient alternative. While DPO offers simplicity, it remains underutilized in state-of-the-art LLMs, suggesting potential limitations. In this work, we revisit DPO, analyzing its theoretical foundations and empirical performance to bridge this gap. We identify three key properties—termed 3D-properties—that emerge from DPO's learning process: Drastic drop in rejected response likelihood, Degradation into response suppression, and Dispersion effect on unseen responses. We show that these issues arise from DPO's optimization dynamics, where the interaction between chosen and rejected response gradients leads to instability. Our findings are supported by experiments on both a controlled toy model and real-world LLM tasks, including mathematical problem-solving and instruction following. To address these challenges, we propose simple regularization techniques that improve training stability and performance. Additionally, we examine how preference data distribution impacts DPO's effectiveness, offering insights into how alignment models handle out-of-domain (OOD) data. Our work connects these observations to broader research and provides a theoretical explanation for DPO's limitations. We hope these insights will guide future advancements in reward-model-free preference learning, bringing it closer to reward-model-based approaches.
# 1 INTRODUCTION

Large language models (LLMs) have demonstrated exceptional performance across a wide range of tasks and domains (Touvron et al., 2023; Chowdhery et al., 2023; Jiang et al., 2023; Zhang et al., 2022). Several techniques have been developed for fine-tuning LLMs, most notably Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) (Achiam et al., 2023; Touvron et al., 2023). SFT involves directly training LLMs on labeled data to tailor their responses for specific tasks, whereas RLHF refines LLMs by incorporating feedback that aligns their outputs with human preferences. RLHF, in particular, has been instrumental in expanding the application of both closed-source (OpenAI, 2022; Anthropic, 2024; Team et al., 2023) and open-source LLMs (Touvron et al., 2023; Yang et al., 2023), driven by the need to align foundational models with human values and preferences (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022).

Existing RLHF methods can be broadly categorized into two classes based on whether the reward signal is explicitly modeled. Reward-model-based (RM-based) alignment, pioneered by OpenAI (Ouyang et al., 2022; Achiam et al., 2023; Touvron et al., 2023), first trains a Reward Model (RM) from user preferences, typically through Maximum Likelihood Estimation (MLE), and then leverages actor-critic algorithms such as Proximal Policy Optimization (PPO) (Schulman et al., 2017) to tune the SFT model to realize alignment. This approach often requires substantial computational resources and suffers from sample inefficiency (Choshen et al., 2019). Conversely, another class of methods, known as reward-model-free (RM-free) alignment, such as Direct Preference Optimization (DPO) (Rafailov et al., 2024), Identity Preference Optimization (IPO) (Azar et al., 2024), Sequence Likelihood Calibration (SLiC) (Zhao et al., 2023), DPO-positive (Pal et al., 2024) and Simple Preference Optimization (SimPO) (Meng et al., 2024), do not rely on an extra RM. These approaches offer a more resource-efficient alternative by optimizing the policy directly from preferences, therefore attracting much attention from the academic community, where computational resources are often limited.
In this work, we begin our analysis by using the vanilla DPO as a case study, subsequently extending our findings to encompass broader RM-free alignment strategies. Despite its simplicity and promise, DPO has exhibited several perplexing phenomena that remain unclear or underexplained in practice. One notable counter-intuitive observation is that the likelihood of both preferred and rejected responses tends to decrease over the course of DPO training (Yuan et al., 2024; Mitchell, 2023), while the likelihood of certain tokens diverging from the training data increases (Xu et al., 2024a). Additional observations are summarized in Section 2.1. Without a deeper theoretical exploration of these phenomena, purely empirical efforts to apply or improve DPO are likely to face inefficiencies.

Our work identifies the issues surrounding vanilla DPO and its variants from both theoretical and practical perspectives. The analysis reveals inherent instability in the DPO training process, which we encapsulate as the 3D-properties: Drastic drop in the likelihood of rejected responses, Degradation into response suppression, and Dispersion effect on unseen responses. Through our analytical framework, we show that these phenomena stem from the inherent features of DPO's optimization objective, where the interaction between the gradients of chosen and rejected responses leads to instability and hinders overall performance. Furthermore, our findings confirm that the distribution of preference data critically influences DPO's effectiveness, with on-policy DPO performing better than off-policy DPO, which is consistent with concurrent empirical studies (Tang et al., 2024; Guo et al., 2024).

To enhance DPO's stability and performance, we propose several regularization methods, including the adaptive adjustment of weights on the gradients of chosen and rejected responses, as well as incorporating an SFT loss into the objective. Our results suggest a fundamental trade-off within the DPO algorithm: balancing the mitigation of the 3D-properties while preventing LLMs from straying too far from the preference learning paradigm. Additionally, we compare DPO with the state-of-the-art RM-based method, RLHF-PPO, revealing that its superiority stems largely from avoiding the 3D-properties. Our experimental approach begins with the design of a toy model to quickly validate our hypotheses, followed by a rigorous test of the actual performance of real LLMs on tasks such as mathematical problem solving and instruction following.

As this topic has garnered significant attention recently, an increasing number of works are contributing to the discussion. To highlight the contributions of our approach, we compare our findings with several of the most relevant concurrent studies in Section 2.2. A comprehensive review of related works is provided in Appendix A.
# 2 PRELIMINARIES

Large Language Model (LLM). An LLM defines a $\theta$-parameterized conditional distribution $\pi_{\theta}(a|x)$, which takes a prompt $x$ as input and produces a response $a$. More specifically, the sampling from LLMs is performed in an auto-regressive manner, $\pi_{\theta}(a|x) = \prod_t \pi_{\theta}(a_t|x, a_{1:t-1})$, where $a_t$ is the $t$-th token in the response $a$ and $a_{1:t-1}$ are tokens in the response before $a_t$.
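The per-sequence log-probabilities used throughout the paper follow directly from this factorization. As a small illustration (not from the paper's code; the alignment of `logits` with `response_ids` is an assumed tokenizer/model convention), the sequence log-probability is simply the masked sum of per-token log-probabilities:

```python
import torch
import torch.nn.functional as F

def sequence_logprob(logits: torch.Tensor, response_ids: torch.Tensor,
                     response_mask: torch.Tensor) -> torch.Tensor:
    """logits: (T, vocab), assumed aligned so that logits[t] predicts response_ids[t];
    response_mask is 1 on response tokens and 0 on prompt/padding tokens."""
    logps = F.log_softmax(logits, dim=-1)                          # per-token log-distributions
    token_logps = logps.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)
    return (token_logps * response_mask).sum()                     # log pi_theta(a | x)
```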
RM-based RLHF. Training LLMs typically involves three stages: Pretraining, SFT, and RLHF. We outline the standard PPO paradigm here, which is a typical RM-based RLHF algorithm. Beginning with a well-trained SFT model, denoted as $\pi_0$, we proceed by sampling two responses from $\pi_0$ for each instance in a given prompt set. Subsequently, we compile a preference dataset $\mathcal{D} = \{(x, a^+, a^-)\}$, where $a^+$ and $a^-$ denote human-preferred and human-dispreferred completions, respectively. The distribution of the preference dataset is assumed to follow the Bradley-Terry model (Bradley & Terry, 1952), i.e., the probability that response $a^+$ is better than $a^-$ is given by:

$$
p_{r}\left(a^{+} \succ a^{-} \mid x\right) = \frac{\exp\left(r\left(x, a^{+}\right)\right)}{\exp\left(r\left(x, a^{+}\right)\right) + \exp\left(r\left(x, a^{-}\right)\right)} = \sigma\left(r\left(x, a^{+}\right) - r\left(x, a^{-}\right)\right), \tag{1}
$$
where $\succ$ represents the preference relation, and $\sigma(x) = \frac{1}{1 + e^{-x}}$ is the sigmoid function. To train an RM $r(\cdot, \cdot)$, we maximize the log-likelihood of the observed preferences by minimizing the following loss function:

$$
\ell_{R}(r) = -\sum_{(x, a^{+}, a^{-})} \log p_{r}\left(a^{+} \succ a^{-} | x\right) = -\sum_{(x, a^{+}, a^{-})} \log \sigma\left(r\left(x, a^{+}\right) - r\left(x, a^{-}\right)\right). \tag{2}
$$
During the reinforcement learning phase, we update the LLM to maximize the return from the learned RM using the following objective function:

$$
\max_{\theta} J_{r}(\theta) = \max_{\theta} \sum_{x} \mathbb{E}_{a \sim \pi_{\theta}(\cdot | x)}\left[ r(x, a) - \beta \log \frac{\pi_{\theta}(a | x)}{\pi_{0}(a | x)} \right], \tag{3}
$$

where $\pi_{\theta}$ is initialized as $\pi_0$ and $\beta$ controls the deviation from the original model. PPO (Schulman et al., 2017) is typically used to solve the problem in practice. Algorithms that optimize the policy using a separate RM are referred to as $RM$-based alignment.
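For intuition, the quantity inside the expectation in Eq. (3) is just the learned reward minus a $\beta$-weighted log-ratio penalty that keeps $\pi_{\theta}$ close to $\pi_0$. A minimal sketch of that per-sample score (the helper name and its inputs are illustrative, not from the paper):

```python
def kl_penalized_score(reward: float, policy_logp: float, ref_logp: float,
                       beta: float = 0.1) -> float:
    """Score maximized in Eq. (3) for one (x, a): r(x, a) - beta * log(pi_theta(a|x) / pi_0(a|x))."""
    return reward - beta * (policy_logp - ref_logp)
```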
DPO. Instead of learning a separate RM, DPO (Rafailov et al., 2024) directly optimizes the policy $\pi_{\theta}$ over preference data. DPO implicitly leverages a particular choice of RM parameterization that enables the extraction of its optimal policy in closed form, without a reinforcement learning training loop:

$$
\ell^{\mathrm{DPO}}(\theta) = -\sum_{(x, a^{+}, a^{-})} \log \sigma\left[ \beta \log \frac{\pi_{\theta}(a^{+} | x)}{\pi_{0}(a^{+} | x)} - \beta \log \frac{\pi_{\theta}(a^{-} | x)}{\pi_{0}(a^{-} | x)} \right]. \tag{4}
$$

As shown, DPO leverages logistic regression loss to directly fine-tune the LLM on preference data. This approach, along with its various variants (Zhao et al., 2023; Amini et al., 2024; Azar et al., 2024), is referred to as $RM$-free alignment due to the elimination of an explicit RM.
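For concreteness, Eq. (4) can be computed directly from summed per-sequence log-probabilities under the policy and the frozen reference model $\pi_0$. The sketch below is a minimal batched version; the tensor names are illustrative and not taken from any official DPO implementation.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """Each input: shape (batch,), holding log pi(a|x) summed over response tokens."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps        # log pi_theta(a+|x) / pi_0(a+|x)
    rejected_logratio = policy_rejected_logps - ref_rejected_logps  # log pi_theta(a-|x) / pi_0(a-|x)
    logits = beta * (chosen_logratio - rejected_logratio)
    return F.softplus(-logits).mean()                               # -log sigmoid(logits), batch mean
```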
# 2.1 UNDEREXPLORED OBSERVATIONS ABOUT DPO

Though the absence of the need for additional RM training makes DPO particularly attractive, several observations remain underexplored. The most concerning issue is that, to the best of our knowledge, few models using DPO (or other RM-free algorithms) have achieved performance comparable to state-of-the-art closed-source LLMs such as OpenAI's GPT-4o or Anthropic's Claude, which reportedly use PPO methods during training. Besides, many other phenomena have been reported but lack comprehensive theoretical explanations. Here we make a summary for clarity.

Observation 1. During vanilla DPO training, the likelihood of both the chosen and rejected responses in the preference datasets tends to decrease, whereas the likelihood of unseen tokens not appearing in the preference pairs tends to increase (Mitchell, 2023).

Observation 2. Compared with RM-based alignment, the performance of DPO is relatively unstable and sub-optimal (Wang et al., 2024).

Observation 3. The performance of DPO is significantly affected by the distribution shift between the model outputs and the preference dataset. In general, on-policy DPO, where both the chosen responses and the rejected responses are sampled from the policy model $\pi_{\theta}$, outperforms other scenarios (Tang et al., 2024).
# 2.2 COMPARISON WITH RELATED CONTEMPORARY STUDIES

Several concurrent studies attempt to explain these observations, and we highlight our differences to underscore our contributions. For Observation 1, Feng et al. (2024) share a similar view regarding the degradation of gradients but offer limited analysis and lack experimental validation on real LLMs. In contrast, our work offers a more rigorous and comprehensive analysis supported by theoretical insights and validation across toy models and large-scale real-world LLM experiments.

For Observation 2, Xu et al. (2024a) point out that the set of policies minimizing the PPO loss is a subset of the set minimizing the DPO loss, which offers a partial explanation. However, their analysis focuses solely on the endpoint of the optimization and does not examine the dynamic process by which the policy evolves. It leaves unaddressed how unexpected policies emerge during training. In contrast, our gradient analysis offers a comprehensive understanding of the entire optimization trajectory, shedding light on how and why sub-optimal policies arise throughout the DPO training process. For Observation 3, Tang et al. (2024) investigate the performance gap between on-policy and off-policy alignment algorithms from an empirical perspective, while our insights are rooted in theoretical findings.

Our work advances the understanding of these observations, providing critical insights into the underlying mechanisms and reinforcing the findings of these concurrent studies. In the following section, we will present our theoretical explanations for these observations.
# 3 FUNDAMENTAL LIMITATIONS OF VANILLA DPO: 3D-PROPERTIES

We first identify a critical flaw inherent in vanilla DPO. At first glance, the loss function of vanilla DPO, as defined in Eq. (4), appears to be composed of two parts: the term $\log \frac{\pi_{\theta}(a^{+}|x)}{\pi_{0}(a^{+}|x)}$ aims to increase the likelihood of chosen responses, while the term $\log \frac{\pi_{\theta}(a^{-}|x)}{\pi_{0}(a^{-}|x)}$ seeks to decrease the likelihood of rejected responses. However, this seemingly straightforward interpretation overlooks significant underlying issues, which we characterize through the 3D-properties of vanilla DPO.

Property 1 (Drastic drop in rejected response likelihood). The likelihood of a rejected response tends to shift much more rapidly than that of a chosen response.

Property 2 (Degradation into response suppression). As optimization progresses, DPO gradually loses its ability to steer the direction of optimizing chosen responses and instead devolves into merely suppressing the rejected responses.

Property 3 (Dispersion effect on unseen responses). As DPO training progresses, the likelihood of both chosen and rejected responses gradually decreases, while the likelihood of generating out-of-distribution (OOD) responses increases.

These properties are non-trivial, as they reveal inherent challenges in DPO's optimization process that are not immediately apparent from the loss function alone. These phenomena closely align with the empirical observations we discussed earlier, pointing to the structural limitations of DPO. In the following section, we delve into a theoretical analysis to further explain the origins of these 3D-properties and provide insights into their impact on the optimization trajectory.
# 3.1 THEORETICAL FOUNDATIONS

In this section, we provide the theoretical foundations for the 3D-properties, followed by detailed explanations of the observations discussed in Section 2.1. The loss function for DPO shown in Eq. (4) can be rewritten as:

$$
\ell^{\mathrm{DPO}}(\theta) = \sum_{(x, a^{+}, a^{-})} \log\left(1 + \left(\frac{\pi_{0}(a^{+} | x)}{\pi_{0}(a^{-} | x)} \cdot \frac{\pi_{\theta}(a^{-} | x)}{\pi_{\theta}(a^{+} | x)}\right)^{\beta}\right). \tag{5}
$$

For a given triple $(x, a^{+}, a^{-})$, let

$$
\alpha := \left(\frac{\pi_{0}(a^{+} | x)}{\pi_{0}(a^{-} | x)}\right)^{\beta}, \quad \pi^{+} := \pi_{\theta}(a^{+} | x), \quad \pi^{-} := \pi_{\theta}(a^{-} | x), \quad z := \frac{\pi_{\theta}(a^{-} | x)}{\pi_{\theta}(a^{+} | x)} = \frac{\pi^{-}}{\pi^{+}}.
$$

Then we have

$$
\frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{+}} = \frac{\partial \log(1 + \alpha z^{\beta})}{\partial z} \frac{\partial z}{\partial \pi^{+}} = \frac{\alpha \beta}{1 + \alpha z^{\beta}} z^{\beta - 1} \frac{\partial z}{\partial \pi^{+}} = \frac{\alpha \beta}{1 + \alpha z^{\beta}} z^{\beta - 1}\left[-\frac{\pi^{-}}{(\pi^{+})^{2}}\right],
$$

$$
\frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{-}} = \frac{\partial \log(1 + \alpha z^{\beta})}{\partial z} \frac{\partial z}{\partial \pi^{-}} = \frac{\alpha \beta}{1 + \alpha z^{\beta}} z^{\beta - 1} \frac{\partial z}{\partial \pi^{-}} = \frac{\alpha \beta}{1 + \alpha z^{\beta}} z^{\beta - 1}\left(\frac{1}{\pi^{+}}\right).
$$

With these simplified forms, we can obtain the following corollaries to explain the properties above:
Figure 1: Toy model setup. Top left: the optimal policy where the highlighted blocks represent optimal responses. Top right: preference dataset construction. Lower left: the initialization of the SFT model. Lower right: policy output after DPO training.
Corollary 1 (Explanation for Property 1). The ratio of the gradient with respect to the rejected response likelihood $\pi^{-}$ to the gradient with respect to the chosen response likelihood $\pi^{+}$ is equal to the ratio of $\pi^{+}$ to $\pi^{-}$:

$$
\frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{-}} / \frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{+}} = \frac{\partial z}{\partial \pi^{-}} / \frac{\partial z}{\partial \pi^{+}} = -\frac{\pi^{+}}{\pi^{-}},
$$

which indicates that as $\pi^{+}$ increases and $\pi^{-}$ decreases, the gradient with respect to $\pi^{-}$ grows faster, leading to a more rapid decline in the likelihood of the rejected response.
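Corollary 1 is easy to sanity-check numerically: differentiating the single-pair loss $\log(1+\alpha z^{\beta})$ with autograd reproduces the ratio $-\pi^{+}/\pi^{-}$. The values of $\alpha$, $\beta$, $\pi^{+}$, and $\pi^{-}$ below are arbitrary illustrative choices, not settings from the paper.

```python
import torch

alpha, beta = 1.0, 0.1
pi_plus = torch.tensor(0.30, requires_grad=True)    # pi_theta(a+ | x)
pi_minus = torch.tensor(0.05, requires_grad=True)   # pi_theta(a- | x)

loss = torch.log(1 + alpha * (pi_minus / pi_plus) ** beta)   # Eq. (5) for a single triple
loss.backward()

grad_ratio = pi_minus.grad / pi_plus.grad
print(grad_ratio.item(), (-pi_plus / pi_minus).item())       # both print -6.0 (up to float precision)
```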
Corollary 2 (Explanation for Property 2). As $\pi^{-} \to 0$, we have $z \to 0$ and $\frac{\alpha\beta}{1 + \alpha z^{\beta}} \to \alpha\beta$. Thus,

$$
\frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{+}} \rightarrow -\alpha \beta (\pi^{+})^{-\beta - 1} (\pi^{-})^{\beta} \rightarrow 0, \quad \frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{-}} \rightarrow \alpha \beta (\pi^{+})^{-\beta} (\pi^{-})^{\beta - 1} \rightarrow \infty,
$$

given that $\beta < 1$ and $\pi^{-} \to 0$. This creates a dynamic where the gradient for the rejected response grows exceedingly large, while the gradient for the chosen response diminishes significantly. As a result, DPO progressively shifts its focus to suppressing the rejected responses and loses the ability to steer the direction of optimizing chosen responses.

Corollary 3 (Explanation for Property 3). When $\pi^{-}$ drastically drops to 0, the gradient on $\pi^{+}$ fails, and the likelihood of the chosen response is likely to decrease along with the rejected response, as they often share many similar tokens and patterns. The constancy of the sum of probabilities implies that as both $\pi^{+}$ and $\pi^{-}$ decrease, probability mass will randomly disperse onto other unseen responses outside the preference dataset.

Based on these theoretical insights, we can further explore and explain the observations discussed in Section 2.1. Observation 1 is directly explained by Corollary 3. For Observation 2, we will show that 3D-properties do not manifest during the RM training process in the RM-based alignment pipeline. Regarding Observation 3, we will demonstrate that the distribution gap between the LLM's original outputs and the preference dataset plays a crucial role in determining the influence of the 3D-properties. The impact of these properties is notably less pronounced in on-policy DPO, where the preference dataset is sampled directly from the policy model's outputs. In the following sections, we will delve deeper into each of these statements and provide empirical validation.
# 3.2 SYNTHETIC VALIDATION WITH A TOY MODEL

In this section, we introduce a simplified toy model specifically designed to facilitate synthetic experiments, thereby enhancing the persuasiveness of our arguments from Corollary 1 to 3. Then we conduct experiments on real LLMs.

Figure 2: Dynamic optimization process with vanilla DPO using the toy model. Left: likelihood dynamics over training epochs. The blue curve represents the average likelihood of chosen responses, yellow shows the minimum for chosen responses, green represents the average for rejected responses, red shows the maximum for rejected responses, and purple represents the average for unseen responses. Middle: dynamics of averaged $\frac{\partial\ell^{\mathrm{DPO}}}{\partial\pi^{+}}$ and $\frac{\partial\ell^{\mathrm{DPO}}}{\partial\pi^{-}}$ over training epochs. Right: likelihood dynamics over training epochs on a log scale, highlighting the drastic drop in the likelihood of rejected responses.
# 3.2.1 TOY MODEL SETUP

The diagram for the toy model is illustrated in Figure 1. We construct a discrete space consisting of 4 prompts and 10 responses. The policy $\pi_{\theta}$, which simulates a simplified version of an LLM, is implemented as a three-layer MLP that processes a one-hot vector and outputs a categorical distribution over the responses. The response space is organized such that the first 4 dimensions correspond to chosen responses, dimensions 5 through 8 represent rejected responses, and the final 2 dimensions correspond to unseen responses not present in the preference dataset.

In this setup, each prompt has an optimal response (e.g., response 1 is optimal for prompt 1, as shown in the upper left figure). When constructing the preference dataset for DPO training, we adopt a mini-batch sampling strategy to mimic real-world annotation processes. Specifically, assuming an ideal annotator, each input prompt is perfectly matched with its optimal response—corresponding to the diagonal elements of the matrix, as illustrated in the upper right figure in Figure 1. For each mini-batch, we then randomly select one other response within the batch to create preference data pairs. This approach ensures that gradient updates are computed from diverse mini-batch samples.

To simulate the Pretraining and SFT process, we manually assign output probabilities and use them as labels to train $\pi_{\theta}$. Initially, as shown in the lower left figure, we set the likelihood of both chosen and rejected responses at 0.12, treating both as on-policy. The constructed preference dataset is then used for DPO training, with the output after 500 epochs shown in the lower right. The code is provided in the supplementary material.
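A minimal sketch of such a toy setup is given below. It is not the authors' supplementary code: the hidden sizes, learning rate, omission of the SFT pre-fit, and the particular rejected-response pairing are assumptions made purely for illustration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                       nn.Linear(32, 32), nn.ReLU(),
                       nn.Linear(32, 10))          # logits over the 10 responses
ref = copy.deepcopy(policy)                        # frozen "SFT" reference model
for p in ref.parameters():
    p.requires_grad_(False)

prompts = torch.eye(4)                             # one-hot prompts
chosen = torch.arange(4)                           # response i is optimal for prompt i
rejected = torch.tensor([4, 5, 6, 7])              # one dispreferred response per prompt (assumed pairing)
beta = 0.1
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for epoch in range(500):
    logp = F.log_softmax(policy(prompts), dim=-1)
    logp0 = F.log_softmax(ref(prompts), dim=-1)
    idx = torch.arange(4)
    margin = beta * ((logp[idx, chosen] - logp0[idx, chosen])
                     - (logp[idx, rejected] - logp0[idx, rejected]))
    loss = F.softplus(-margin).mean()              # vanilla DPO loss, Eq. (4)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Tracking `logp` for chosen, rejected, and unseen indices over the epochs reproduces the kind of likelihood curves discussed next.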
# 3.2.2 RESULTS

Figure 2 illustrates the dynamic optimization process during DPO training. In the first figure, the likelihood of chosen responses (blue and yellow curves) increases, while the likelihood of rejected responses (green and red curves) decreases in the early phases of training. However, as training progresses, the likelihood of chosen responses begins to decline over the longer run. During this degradation phase, as both chosen and rejected response likelihoods decrease, the probability is redistributed to unseen responses (purple curve).

The second and third figures reveal the underlying causes of this shift: as $\pi_{\theta}(a^{-}|x)$ approaches zero, the absolute value of $\partial \ell^{\mathrm{DPO}} / \partial \pi^{-}$ increases sharply compared to $\partial \ell^{\mathrm{DPO}} / \partial \pi^{+}$. The absolute value of $\partial \ell^{\mathrm{DPO}} / \partial \pi^{+}$ becomes progressively smaller, weakening its influence on the optimization direction. These results align with the earlier theoretical analysis.

Moreover, this insight provides an explanation for the observed superiority of on-policy DPO (Observation 3). In contrast to off-policy DPO, on-policy DPO begins with a higher likelihood for rejected responses, thereby extending the duration before their likelihood significantly diminishes. To further validate the differing impacts of on-policy and off-policy DPO, we configure four scenarios by adjusting the initial distribution of outputs to simulate these conditions. A higher initialized likelihood (0.12) simulates responses sampled in an on-policy manner, while a lower one (0.02) simulates responses sampled off-policy. The initial state and the subsequent changes in the likelihood of each response are illustrated in Figure 3.

Figure 3: From left to right, the figures show the initial state and the likelihood dynamics for chosen/rejected/unseen responses in Scenarios 1 to 4, similar to the left diagram in Figure 2: (1) both chosen and rejected responses are on-policy, (2) chosen off-policy and rejected on-policy, (3) chosen on-policy and rejected off-policy, and (4) both off-policy.

Notably, in Scenario 1, where both chosen and rejected responses are on-policy, the 3D-properties are relatively mild, as shown by the high peak probability of the optimal response (approximately 0.6) and the minimal dispersion effect on unseen responses. Additional detailed results and analyses for all four scenarios can be found in Appendix D.

The intention of the toy model and its connection to real LLMs. The toy model serves as an abstract simulation that amplifies the effect of 3D-properties, which are less pronounced and harder to visualize in real-world experiments. While the toy model differs from real LLM training in several ways—such as sampling frequency—its design offers useful insights. In real-world settings, DPO is typically trained over one epoch, with each data point used only a few times. In contrast, in the toy model, the same data points are sampled repeatedly. Conceptually, this is similar to treating each input/output as a token rather than a complete prompt/response, where each token may be sampled multiple times during real-world training. Since both the chosen and rejected responses are generated from the same prompt, they often share common tokens. As a result, the decrease in the likelihood of rejected responses can impact the likelihood of chosen responses, leading to a corresponding decline in their likelihood.
# 3.3 REGULARIZATION TECHNIQUES

It becomes evident that the rate at which $\pi_{\theta}(a^{-}|x)$ declines is crucial in determining the severity of the 3D-properties' impact. This observation leads to the following proposition:

Proposition 1. To lessen the severity of the 3D-properties, it is advantageous to moderate the rate at which the likelihood of rejected responses declines.

Inspired by Proposition 1, we introduce two straightforward regularization techniques. The first technique employs adaptive values of $\beta$ to control the rate at which the likelihood of rejected responses declines, referred to as Flex-DPO. The second technique involves augmenting the DPO loss with an SFT loss, a strategy that has been shown to significantly enhance the stability of DPO in previous studies (Hou et al., 2024; Xu et al., 2024b). These regularization methods have shown promising results with our toy model and will be further validated on real LLMs in the following section. The theoretical analysis is similar to that of vanilla DPO and is thus deferred to Appendix B.2.
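A hedged sketch of how both regularizers could look in code is given below. The precise Flex-DPO objective is defined in Appendix B.2; here we simply assume the natural reading in which separate coefficients $\beta^{+}$ and $\beta^{-}$ weight the chosen and rejected log-ratios, and an optional SFT term on the chosen response is mixed in with weight `lambda_sft`. None of the names or default values below come from the paper.

```python
import torch.nn.functional as F

def regularized_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                         ref_chosen_logps, ref_rejected_logps,
                         beta_plus: float = 0.1, beta_minus: float = 0.05,
                         lambda_sft: float = 0.0):
    """Flex-DPO-style margin with separate betas, plus an optional SFT regularizer."""
    margin = (beta_plus * (policy_chosen_logps - ref_chosen_logps)
              - beta_minus * (policy_rejected_logps - ref_rejected_logps))
    loss = F.softplus(-margin).mean()
    if lambda_sft > 0.0:
        loss = loss - lambda_sft * policy_chosen_logps.mean()   # maximize likelihood of chosen responses
    return loss
```

Shrinking `beta_minus` relative to `beta_plus` slows the decline of the rejected-response likelihood, which is exactly the lever Proposition 1 calls for.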
# 3.4 INHERENT ABSENCE OF 3D-PROPERTIES IN RM-BASED ALIGNMENT

In this section, we show that 3D-properties do not manifest in RM-based alignment methods, which may account for why DPO methods only achieve sub-optimal performance. Since DPO is closely related to RM training, and the Best-of-N performance of the RM can partially reflect the ultimate performance of the policy model (Gui et al., 2024), we focus on analyzing the RM's objective. For a given $(x, a^{+}, a^{-})$, let $r^{+} := r(a^{+}|x)$ and $r^{-} := r(a^{-}|x)$; the gradients with respect to $r^{+}$ and $r^{-}$ are:

$$
\frac{\partial \ell^{\mathrm{RM}}}{\partial r^{+}} = \frac{\partial \log(1 + e^{(r^{-} - r^{+})})}{\partial r^{+}} = -\frac{e^{(r^{-} - r^{+})}}{1 + e^{(r^{-} - r^{+})}} = -\frac{1}{1 + e^{(r^{+} - r^{-})}},
$$

$$
\frac{\partial \ell^{\mathrm{RM}}}{\partial r^{-}} = \frac{\partial \log(1 + e^{(r^{-} - r^{+})})}{\partial r^{-}} = \frac{e^{(r^{-} - r^{+})}}{1 + e^{(r^{-} - r^{+})}} = \frac{1}{1 + e^{(r^{+} - r^{-})}}.
$$

This indicates that the gradients for the chosen and rejected responses are balanced and do not exhibit 3D-properties. In Section 4.5, we will further discuss the relationship between DPO and RM-based alignment in real LLMs.
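This balance is easy to verify with autograd on a single pair of scalar rewards (the values below are arbitrary): the two gradients always have the same magnitude and opposite signs, so neither term can dominate the update the way $\partial\ell^{\mathrm{DPO}}/\partial\pi^{-}$ does.

```python
import torch
import torch.nn.functional as F

r_plus = torch.tensor(1.2, requires_grad=True)    # r(x, a+)
r_minus = torch.tensor(0.3, requires_grad=True)   # r(x, a-)

loss = F.softplus(r_minus - r_plus)               # Eq. (2) for one pair: -log sigmoid(r+ - r-)
loss.backward()
print(r_plus.grad.item(), r_minus.grad.item())    # approximately -0.289 and +0.289
```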
# 4 EXPERIMENTS

In this section, we transition from theoretical analyses and toy model simulations to real-world experiments with LLMs to further validate our theoretical insights. We verify the existence of 3D-properties, the superiority of on-policy DPO over off-policy DPO, the superiority of RM over DPO, and the effectiveness of the proposed regularization technique.

# 4.1 EXPERIMENTAL SETUP

Datasets. We chose mathematical reasoning and instruction following as our primary benchmarks because these tasks are easily quantifiable, providing clear metrics for evaluating model performance. For mathematical reasoning, we used MATH (Hendrycks et al., 2021) as the main dataset for both training and testing<sup>1</sup>. To assess the model's out-of-distribution (OOD) generalization capabilities, we selected SuperCLUE-Math (Xu et al., 2020), another dataset, which was used exclusively for testing. Additionally, we included two in-house datasets focused on poem and slogan generation to evaluate the model's ability to handle creative tasks with strict structural and linguistic constraints. The poem dataset, for instance, has rigid format and rhyme requirements, making it a good test for evaluating the model's ability to follow complex instructions. Further details and descriptions of the datasets used are provided in Appendix C.

It is widely accepted in industry that preference datasets for model alignment should cover a broad range of domains. Following this consensus, we further utilized a general dataset consisting of approximately 400,000 preference samples across diverse domains. These prompts were sourced from HH-rlhf (Bai et al., 2022) and UltraFeedback (Cui et al., 2024). A detailed breakdown of the dataset sizes is provided in Table 6 in Appendix C.

The LLMs of concern. We focus on Baichuan2-13B and Baichuan2-33B, an advanced bilingual (Chinese and English) LLM series. The 13B model is openly available (Yang et al., 2023), and the 33B model extends the 13B architecture with increased parameters.
# 4.2 EFFECT OF TRAINING DATA DISTRIBUTION: ON-POLICY VS. OFF-POLICY

Building on the theoretical insights in Section 3, we hypothesize that the performance of vanilla DPO is significantly influenced by the distribution gap between the training dataset and the outputs of the policy model, specifically whether the algorithm is on-policy or off-policy. Off-policy DPO uses an external preference dataset, while on-policy DPO samples preferences directly from the policy model. On-policy DPO therefore enjoys a smaller distribution gap compared with off-policy DPO.

To conduct on-policy DPO, we used the policy model to produce 8 candidates for each prompt in the training set of MATH. The best and worst responses were selected by GPT-4 (Achiam et al., 2023) to form a preference pair, with the standard solutions given as the reference context. After filtering out uniformly good or bad responses, we compiled the $\mathbf{MATH^{*}}$ dataset, which contains 5,826 pairs $\{x, a^{+}, a^{-}\}$. We randomly selected 2,000 samples from the original test set to serve as the test set for $\mathbf{MATH^{*}}$. For off-policy DPO, we used the original solutions from the dataset as the chosen responses and generated the rejected responses using Qwen1.5-7B (Bai et al., 2023), a relatively earlier model with limited capabilities.
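In pseudocode, the on-policy pair construction described above amounts to the following loop. The helpers `sample_from_policy` and `judge_scores` are hypothetical stand-ins for policy sampling and GPT-4 judging against the reference solution; they are not APIs from the paper.

```python
def build_on_policy_pairs(prompts, sample_from_policy, judge_scores, n_candidates=8):
    """Return (x, a_plus, a_minus) triples, dropping prompts whose candidates are uniformly good or bad."""
    pairs = []
    for x in prompts:
        candidates = [sample_from_policy(x) for _ in range(n_candidates)]
        scores = judge_scores(x, candidates)           # e.g., judged with the reference solution in context
        if max(scores) == min(scores):                 # all equally good or equally bad: filter out
            continue
        a_plus = candidates[scores.index(max(scores))]
        a_minus = candidates[scores.index(min(scores))]
        pairs.append((x, a_plus, a_minus))
    return pairs
```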
To validate the hypothesis that the presence of off-policy data weakens performance, we implemented the four scenarios consistent with the toy model (Figure 3). We evaluated the policy model using GPT-4, which assigned scores ranging from 1 to 5 based on the accuracy of both the final answer and the problem-solving process, with the standard solutions also given as the reference context. The scoring criteria are detailed in Table 4. The average performance on the $\mathbf{MATH^{*}}$ and SuperCLUE-Math datasets is reported in Table 1, with specific results in Table 8. Among the four scenarios, Scenario 1—where both chosen and rejected responses are on-policy—ensured a more stable DPO training process and delivered the best performance.

Table 1: Results tested on MATH* and SuperCLUE-Math. Scenario 1, with both chosen and rejected responses sampled on-policy, shows the best performance.

<table><tr><td rowspan="2">Setting</td><td colspan="2">Baichuan2-13B</td><td colspan="2">Baichuan2-33B</td></tr><tr><td>5 points</td><td>4&5 points</td><td>5 points</td><td>4&5 points</td></tr><tr><td>base model</td><td>32.237%</td><td>42.539%</td><td>44.485%</td><td>53.229%</td></tr><tr><td>DPO in Scenario 1</td><td>37.132%</td><td>47.082%</td><td>47.465%</td><td>54.759%</td></tr><tr><td>DPO in Scenario 2</td><td>32.860%</td><td>43.445%</td><td>44.216%</td><td>51.409%</td></tr><tr><td>DPO in Scenario 3</td><td>28.323%</td><td>41.576%</td><td>44.473%</td><td>53.924%</td></tr><tr><td>DPO in Scenario 4</td><td>26.833%</td><td>37.685%</td><td>46.618%</td><td>54.648%</td></tr></table>

Table 2: Test results on the self-built Poem and Slogan datasets. All metrics are evaluated such that higher values indicate better performance.

<table><tr><td rowspan="2"></td><td colspan="5">Poem</td><td colspan="2">Slogan</td></tr><tr><td>Row Number</td><td>Words per Row</td><td>Rhythm</td><td>Tone Pattern</td><td>Title</td><td>Word Count</td><td>Content</td></tr><tr><td>Base</td><td>0.75</td><td>0.61</td><td>0.64</td><td>0.60</td><td>0.51</td><td>0.34</td><td>0.57</td></tr><tr><td>PPO</td><td>0.91</td><td>0.79</td><td>0.87</td><td>0.82</td><td>1</td><td>0.47</td><td>0.78</td></tr><tr><td>DPO</td><td>0.93</td><td>0.75</td><td>0.83</td><td>0.75</td><td>0.78</td><td>0.45</td><td>0.70</td></tr></table>

Additionally, we report the log probabilities before and after training in Table 7 in Appendix D.2. According to Proposition 1, the key factor affecting the impact of 3D-properties is the decline rate of the rejected responses' likelihood, $\log \pi(a^{-})$. Scenario 1 shows the slowest decline in likelihood compared to the other scenarios, effectively mitigating the adverse effects of 3D-properties, which explains the superior performance of on-policy DPO in our tests.

We also plot the gradients during the DPO training process for Scenario 1, as shown in Figure 6 in Appendix D.2. This visualization supports the analysis in Section 3, demonstrating that the gradients for rejected responses increase more rapidly during training. This excessive decline in the likelihood of generating rejected responses can ultimately lead to model degradation.

In addition to pure on-policy and off-policy DPO, Scenario 2, where the chosen response is off-policy and the rejected response is on-policy, is also prevalent in industry. For instance, in math problems, researchers often treat the correct dataset solution as the chosen response and the LLM-generated incorrect answer as the rejected one, which we demonstrate to be detrimental. Scenario 3 is a mirror experiment for Scenario 2. These experiments confirm that incorporating off-policy data into the preference training set degrades DPO performance.
# 4.3 EXPERIMENTAL VALIDATION OF REGULARIZATION TECHNIQUES

Following Flex-DPO, the regularization method outlined in Section 3.3, we fixed $\beta^{+}$ and systematically decreased $\beta^{-}$. As indicated by the gradient analysis in Appendix B.2.1, the gradient of rejected responses with respect to $\beta^{-}$ follows a non-monotonic trajectory, initially increasing and then decreasing. Reducing $\beta^{-}$ on the left side of this extreme point can effectively reduce the gradient magnitude. However, indiscriminately minimizing the gradient is not always advantageous. As illustrated in Figure 4, model performance does not consistently improve with an excessively small $\beta^{-}$ (see the trend with $\beta^{-} < 0.08$). Over-reduction of $\beta^{-}$ risks causing the DPO algorithm to deviate from the preference learning paradigm and regress toward behavior akin to the SFT algorithm, ultimately compromising its generalization capabilities. This finding warrants further investigation, and while preliminary insights are discussed in Appendix D, deeper exploration is needed. This aspect will be addressed in future research.

Figure 4: Performance on poem generation as $\beta^{-}$ varies, with $\beta^{+} = 0.1$.

Additionally, we tested other DPO variants, such as IPO and SLiC, on the MATH* and SuperCLUE-Math datasets, with results presented in Table 9. Flex-DPO consistently outperforms vanilla DPO, IPO and SLiC, highlighting the effectiveness of the proposed regularization techniques.
# 4.4 RELATIVE INSTABILITY OF DPO TRAINING COMPARED TO RM TRAINING

To assess the stability and performance gap between DPO and RM training, we conducted a parallel comparison using identical datasets. Both models were built on the Baichuan2-33B architecture and trained on the HH-rlhf and UltraFeedback datasets, with evaluations conducted on the HH-rlhf and MATH datasets. The primary evaluation metric was accuracy, defined as the proportion of instances where the model correctly identified the chosen response as superior to the rejected one.
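Concretely, this metric reduces to counting how often the model ranks the chosen response above the rejected one; for the RM the ranking score is the scalar reward, while for DPO one can use the implicit reward $\beta\log\frac{\pi_{\theta}(a|x)}{\pi_{0}(a|x)}$. A minimal sketch (function and argument names are illustrative, not from the paper):

```python
def preference_accuracy(chosen_scores, rejected_scores):
    """Fraction of pairs where the chosen response is scored above the rejected one."""
    assert len(chosen_scores) == len(rejected_scores)
    correct = sum(c > r for c, r in zip(chosen_scores, rejected_scores))
    return correct / len(chosen_scores)
```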
As shown in Figure 5, RM training proved to be significantly more stable, whereas DPO training exhibited notable fluctuations. These findings are consistent with the theoretical results in Section 3.4, which indicate that 3D-properties are absent in RM-based alignment methods. Furthermore, as illustrated in Figure 8 and Figure 9 in Appendix D.2, the DPO model demonstrated a higher tendency to overfit. Specifically, the sharp deceleration in accuracy improvement after the second epoch suggests that the model was overfitting the training data, highlighting the more aggressive optimization dynamics of DPO.

Figure 5: Accuracy of RM and DPO on the HH-rlhf eval set over the training process.
# 4.5 SUBOPTIMALITY OF DPO COMPARED TO RM-BASED ALIGNMENT

To further compare the performance of DPO and the end-to-end RM-based alignment (PPO), we tested both approaches on two datasets for poem and slogan generation. These datasets serve as ideal benchmarks for evaluating instruction-following capabilities, given their explicit and structured scoring criteria. For poem creation, the model must generate responses in accordance with specific text and tone formats based on the prompt. Evaluation metrics include five key aspects: Row Number, Words per Row, Rhythm, Tone Pattern, and Title. For slogan creation, evaluation is based on Word Count and Content. Using Baichuan2-33B for our experiments, the results, shown in Table 2, demonstrate that DPO underperforms compared to RLHF-PPO on both datasets.
# 5 CONCLUSIONS

In this study, we conducted a comprehensive theoretical analysis to elucidate why DPO does not perform as well as RM-based alignment algorithms. The principal challenge identified in DPO is summarized as the 3D-properties. We substantiated our theoretical framework through experimental results obtained from both a toy model and real LLMs in practical applications, including mathematical reasoning and instruction following. Additionally, we assessed the effectiveness of specific regularization techniques. Furthermore, by contrasting DPO training with RM training, we highlighted the inherent instability of DPO. We hope this work can offer research directions to narrow the gap between RM-free preference learning methods and RM-based ones. We leave the discussion of limitations to Appendix D.3.

# ACKNOWLEDGEMENTS

This work was supported by NSF of China (Nos. 92470118, 62306176) and Natural Science Foundation of Shanghai (No. 23ZR1428700).
# REFERENCES
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Afra Amini, Tim Vieira, and Ryan Cotterell. Direct preference optimization with an offset. arXiv preprint arXiv:2402.10571, 2024.
Anthropic. Introducing claude. https://www.anthropic.com/claude, 2024.
Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pp. 4447-4455. PMLR, 2024.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, pp. 324, 1952. doi: 10.2307/2334029. URL http://dx.doi.org/10.2307/2334029.
Huayu Chen, Guande He, Hang Su, and Jun Zhu. Noise contrastive alignment of language models with explicit rewards. arXiv preprint arXiv:2402.05369, 2024.
Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. On the weaknesses of reinforcement learning for neural machine translation. arXiv preprint arXiv:1907.01752, 2019.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240): 1-113, 2023.
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, et al. Ultrafeedback: Boosting language models with scaled ai feedback. In *Forty-first International Conference on Machine Learning*, 2024.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.
Duanyu Feng, Bowen Qin, Chen Huang, Zheng Zhang, and Wenqiang Lei. Towards analyzing and understanding the limitations of dpo: A theoretical perspective. arXiv preprint arXiv:2404.04626, 2024.
Lin Gui, Cristina Gárbacea, and Victor Veitch. Bonbon alignment for large language models and the sweetness of best-of-n sampling. arXiv preprint arXiv:2406.00832, 2024.
Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, et al. Direct language model alignment from online ai feedback. arXiv preprint arXiv:2402.04792, 2024.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
Zhenyu Hou, Yiin Niu, Zhengxiao Du, Xiaohan Zhang, Xiao Liu, Aohan Zeng, Qinkai Zheng, Minlie Huang, Hongning Wang, Jie Tang, et al. Chatglm-rlhf: Practices of aligning large language models with human feedback. arXiv preprint arXiv:2404.00934, 2024.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
Ziniu Li, Tian Xu, Yushun Zhang, Yang Yu, Ruoyu Sun, and Zhi-Quan Luo. Remax: A simple, effective, and efficient method for aligning large language models. arXiv preprint arXiv:2310.10505, 2023.
Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. arXiv preprint arXiv:2309.06657, 2023.
Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734, 2024.
Eric Mitchell. Online experimental results on dpo, 2023. https://wandb.ai/eric_Anthony_mitchell/dpo-demos/runs/og8q3euz?nw=nwusereric_Anthony_mitchell.
OpenAI. Introducing chatgpt, 2022. https://openai.com/blog/chatgpt, Last accessed on 2023-05-09.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35: 27730-27744, 2022.
Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White. Smaug: Fixing failure modes of preference optimisation with dpo-positive. arXiv preprint arXiv:2402.13228, 2024.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.
Yunhao Tang, Daniel Zhaohan Guo, Zeyu Zheng, Daniele Calandriello, Yuan Cao, Eugene Tarassov, Rémi Munos, Bernardo Ávila Pires, Michal Valko, Yong Cheng, et al. Understanding the performance gap between online and offline alignment algorithms. arXiv preprint arXiv:2405.08448, 2024.
Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Zhichao Wang, Bin Bi, Shiva Kumar Pentyala, Kiran Ramnath, Sougata Chaudhuri, Shubham Mehrotra, Xiang-Bo Mao, Sitaram Asur, et al. A comprehensive survey of llm alignment techniques: Rlhf, rlaif, ppo, dpo and more. arXiv preprint arXiv:2407.16216, 2024.
Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for rlhf under k-l-constraint. In ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. Clue: A chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986, 2020.
Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu. Is dpo superior to ppo for llm alignment? a comprehensive study. arXiv preprint arXiv:2404.10719, 2024a.
Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan Zeng, Zhengxiao Du, Wenyi Zhao, et al. Chatglm-math: Improving math problem-solving in large language models with a self-critique pipeline. arXiv preprint arXiv:2404.02893, 2024b.
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, et al. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023.
Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, et al. Advancing llm reasoning generalists with preference trees. arXiv preprint arXiv:2404.02078, 2024.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. Slic-hf: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
# A DETAILED BACKGROUND AND RELATED WORKS

Large language models (LLMs) are profoundly transforming the way we work and live. Performing a three-stage process is the default practice for training LLMs: Pretraining, Supervised Fine-Tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF). The roles of Pretraining and SFT are broadly understood: Pretraining encodes knowledge and SFT aligns question-answer formats. By comparison, RLHF remains less well understood. Specifically, Direct Preference Optimization (DPO) and its variants, as reward-model-free algorithms, have garnered significant attention due to their elegant mathematical form and relatively low resource requirements (Rafailov et al., 2024; Pal et al., 2024; Guo et al., 2024; Xiong et al., 2023). However, DPO has also sparked considerable debate because of its unstable performance in practical applications (Li et al., 2023; Xu et al., 2024a).

# A.1 ON-POLICY ALIGNMENT VS. OFF-POLICY ALIGNMENT

The key inspiration for the DPO algorithm (Rafailov et al., 2024) is a closed-form solution to the RL step in RLHF, and thus an equivalent solution to the optimal policy for the RLHF objective. The original DPO work is an off-policy learning algorithm, as it relies on an extra preference dataset (Helpful-and-Harmless (Bai et al., 2022)) in which the preference pairs are not generated by the policy LLM itself. On the other hand, a number of on-policy learning algorithms have been developed, where the preference responses are sampled from the policy model. Guo et al. (2024) proposed the on-policy version of DPO, in which all responses are sampled in a batch-wise way. A natural tradeoff between the two is the iterative DPO introduced by Xiong et al. (2023); Xu et al. (2024a). The algorithm begins by initializing with an additional preference dataset, then iteratively trains a policy using DPO, collects response pairs through exploration policies, obtains preference signals from human or AI labelers, and updates the dataset with the newly labeled data.
# A.2 INSIGHTS INTO DPO
|
| 333 |
+
|
| 334 |
+
Though RM-free algorithms are favored for their lower computational overhead, whether they can achieve performance on par with state-of-the-art RM-based methods such as RLHF-PPO has sparked much discussion. Liu et al. (2023) prove that the absence of an RM in DPO constrains its ability to sample preference pairs from the optimal policy. Xu et al. (2024a) show that DPO may have a fundamental limitation: the set of its optimal solutions is a superset of the set of optimal solutions of the PPO algorithm. That work also reports empirical results showing that the performance of DPO is affected by the distribution shift between the model outputs and the preference dataset. Feng et al. (2024) discuss the limitations of DPO from the perspective of gradient numerical stability and conduct experiments to preliminarily verify them; however, they do not experiment on real LLMs or establish the correlation.
|
| 335 |
+
|
| 336 |
+
# A.3 OTHER RM-FREE ALIGNMENT ALGORITHMS
|
| 337 |
+
|
| 338 |
+
A major limitation of the DPO objective is its reliance on the Bradley-Terry model to convert pairwise preferences into point-wise rewards. To overcome this, Azar et al. (2024) introduced $\Phi$ -preference optimization ( $\Phi \mathrm{PO}$ ), of which DPO is the special case with $\Phi(P) = \log \frac{P}{1 - P}$ . Identity-preference optimization (IPO) is the variant that replaces the $\Phi$ -function with the identity mapping $\Phi(P) = P$ .
|
| 339 |
+
|
| 340 |
+
Different from DPO or IPO, the core idea of Sequence Likelihood Calibration (SLiC) (Zhao et al., 2023) is to calibrate the likelihood of ranked sequences sampled from the policy being trained. The SLiC loss function can be decomposed into two parts: the rank function, which guarantees that the difference between $\log \pi_{\theta}(a^{+}|x)$ and $\log \pi_{\theta}(a^{-}|x)$ is greater than $\delta$ under the current policy $\pi_{\theta}$ , and the cross-entropy regularizer, which encourages the model to stay close to the SFT model.
|
| 341 |
+
|
| 342 |
+
There are several other variants that try to improve DPO, such as KTO (Ethayarajh et al., 2024), NCA (Chen et al., 2024), and ODPO (DPO with an offset) (Amini et al., 2024). KTO uses a Kahneman-Tversky model of human utility and directly maximizes the utility of generations instead of the log-likelihood of preferences. NCA leverages Noise Contrastive Estimation (NCE) to bridge the gap in handling reward datasets explicitly annotated with scalar evaluations. ODPO does not treat every preference pair equally during fine-tuning and requires the difference between the likelihoods of the preferred and dispreferred responses to be greater than an offset value.
|
| 343 |
+
|
| 344 |
+
# B THEORETICAL FOUNDATIONS
|
| 345 |
+
|
| 346 |
+
# B.1 FUNDAMENTAL LIMITATION IN VANILLA DPO
|
| 347 |
+
|
| 348 |
+
Here we revisit the theoretical findings in Section 3.1 in detail. The loss function for vanilla DPO is given by
|
| 349 |
+
|
| 350 |
+
$$
|
| 351 |
+
\ell^{\mathrm{DPO}}(\theta) = \sum_{(x,a^{+},a^{-})}\log \left(1 + \left(\frac{\pi_{0}(a^{+}|x)}{\pi_{0}(a^{-}|x)}\frac{\pi_{\theta}(a^{-}|x)}{\pi_{\theta}(a^{+}|x)}\right)^{\beta}\right).
|
| 352 |
+
$$
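For concreteness, here is a minimal numerical sketch of this loss written directly in terms of sequence log-probabilities under the policy and the reference model; the function name and the example inputs are illustrative rather than taken from any particular codebase, and $\beta = 0.1$ mirrors the setting reported in Appendix D.1.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Vanilla DPO loss for one (x, a+, a-) triple: -log sigmoid of the reward margin."""
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return math.log(1.0 + math.exp(-margin))   # equals -log(sigmoid(margin))

# Example: the policy slightly prefers the chosen response relative to the reference.
print(dpo_loss(-120.0, -130.0, -121.0, -128.0))
```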
|
| 353 |
+
|
| 354 |
+
For a given $(x, a^{+}, a^{-})$ , let
|
| 355 |
+
|
| 356 |
+
$$
|
| 357 |
+
\alpha := \left(\frac {\pi_ {0} (a ^ {+} | x)}{\pi_ {0} (a ^ {-} | x)}\right) ^ {\beta}, \quad \pi^ {+} := \pi_ {\theta} (a ^ {+} | x), \quad \pi^ {-} := \pi_ {\theta} (a ^ {-} | x), \quad z := \frac {\pi_ {\theta} (a ^ {-} | x)}{\pi_ {\theta} (a ^ {+} | x)}.
|
| 358 |
+
$$
|
| 359 |
+
|
| 360 |
+
Then we have
|
| 361 |
+
|
| 362 |
+
$$
|
| 363 |
+
\frac {\partial \ell^ {\mathrm {D P O}}}{\partial \pi^ {+}} = \frac {\partial \log (1 + \alpha z ^ {\beta})}{\partial z} \frac {\partial z}{\partial \pi^ {+}} = \frac {\alpha \beta}{1 + \alpha z ^ {\beta}} z ^ {\beta - 1} \frac {\partial z}{\partial \pi^ {+}},
|
| 364 |
+
$$
|
| 365 |
+
|
| 366 |
+
$$
|
| 367 |
+
\frac {\partial \ell^ {\mathrm {D P O}}}{\partial \pi^ {-}} = \frac {\partial \log (1 + \alpha z ^ {\beta})}{\partial z} \frac {\partial z}{\partial \pi^ {-}} = \frac {\alpha \beta}{1 + \alpha z ^ {\beta}} z ^ {\beta - 1} \frac {\partial z}{\partial \pi^ {-}}.
|
| 368 |
+
$$
|
| 369 |
+
|
| 370 |
+
Considering the case when $\pi^{-} \to 0$ , we get $(\alpha \beta) / (1 + \alpha z^{\beta}) \to \alpha \beta$ , thus,
|
| 371 |
+
|
| 372 |
+
$$
|
| 373 |
+
\frac {\partial \ell^ {\mathrm {D P O}}}{\partial \pi^ {+}} \rightarrow - \alpha \beta (\pi^ {+}) ^ {- \beta - 1} (\pi^ {-}) ^ {\beta}, \quad \frac {\partial \ell^ {\mathrm {D P O}}}{\partial \pi^ {-}} \rightarrow \alpha \beta (\pi^ {+}) ^ {- \beta} (\pi^ {-}) ^ {\beta - 1}.
|
| 374 |
+
$$
|
| 375 |
+
|
| 376 |
+
As $\pi^{-} \to 0$ , since $\beta < 1$ , $\frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{+}}$ is proportional to $(\pi^{-})^{\beta}$ and tends to 0, while $\frac{\partial \ell^{\mathrm{DPO}}}{\partial \pi^{-}}$ is proportional to $(\pi^{-})^{\beta - 1}$ and tends to infinity. Therefore, in this case, the gradient for the rejected action becomes extremely large, while the gradient for the chosen action becomes very small.
|
| 377 |
+
|
| 378 |
+
Then we want to further explore the token-level gradient. Here, $\pi^{+}$ and $\pi^{-}$ are the likelihood of the sequences. Let $\pi_{i}^{+}$ be the current selection probability of the $i$ -th token for the chosen sequence, and let $\pi_{i}^{-}$ be the current selection probability of the $i$ -th token for the rejected sequence:
|
| 379 |
+
|
| 380 |
+
$$
|
| 381 |
+
\pi^ {+} = \prod_ {i} \pi_ {i} ^ {+} = \pi_ {- i} ^ {+} \cdot \pi_ {i} ^ {+}, \quad \pi^ {-} = \prod_ {i} \pi_ {i} ^ {-} = \pi_ {- i} ^ {-} \cdot \pi_ {i} ^ {-},
|
| 382 |
+
$$
|
| 383 |
+
|
| 384 |
+
where we have $\frac{\partial\pi}{\partial\pi_i} = \pi_{-i}$ . Consider a softmax function, $s_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$ , the corresponding gradients are
|
| 385 |
+
|
| 386 |
+
$$
|
| 387 |
+
\frac {\partial s _ {i}}{\partial z _ {i}} = s _ {i} (1 - s _ {i}), \quad \frac {\partial s _ {j}}{\partial z _ {i}} = - s _ {i} s _ {j}, \quad i \neq j.
|
| 388 |
+
$$
|
| 389 |
+
|
| 390 |
+
Let
|
| 391 |
+
|
| 392 |
+
$$
|
| 393 |
+
C (\pi^ {+}, \pi^ {-}) := \alpha \beta^ {+} (\pi^ {+}) ^ {- \beta^ {+}} (\pi^ {-}) ^ {\beta^ {-}},
|
| 394 |
+
$$
|
| 395 |
+
|
| 396 |
+
then, consider the current selection probability $\pi_i^+$ of the $i$ -th token for the chosen sequence, let the sampled token's index be $c$ . The logit corresponding to this token $c$ is denoted as $x_{i,c}^{+}$ , then we have
|
| 397 |
+
|
| 398 |
+
$$
|
| 399 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial x _ {i , c} ^ {+}} = \frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {+}} \frac {\partial \pi^ {+}}{\partial \pi_ {i} ^ {+}} \frac {\partial \pi_ {i} ^ {+}}{\partial x _ {i , c} ^ {+}} \rightarrow - C (\pi^ {+}, \pi^ {-}) (1 - x _ {i, c} ^ {+}),
|
| 400 |
+
$$
|
| 401 |
+
|
| 402 |
+
if $c' \neq c$ , we have
|
| 403 |
+
|
| 404 |
+
$$
|
| 405 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial x _ {i , c ^ {\prime}} ^ {+}} = \frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {+}} \frac {\partial \pi^ {+}}{\partial \pi_ {i} ^ {+}} \frac {\partial \pi_ {i} ^ {+}}{\partial x _ {i , c ^ {\prime}} ^ {+}} \rightarrow C (\pi^ {+}, \pi^ {-}) x _ {i, c ^ {\prime}} ^ {+}.
|
| 406 |
+
$$
|
| 407 |
+
|
| 408 |
+
Similarly, consider the current selection probability $\pi_i^-$ of the $i$ -th token for the rejected sequence, let the sampled token's index be $c$ . The logit corresponding to this token $c$ is denoted as $x_{i,c}^{-}$ , then we have
|
| 409 |
+
|
| 410 |
+
$$
|
| 411 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial x _ {i , c} ^ {-}} = \frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {-}} \frac {\partial \pi^ {-}}{\partial \pi_ {i} ^ {-}} \frac {\partial \pi_ {i} ^ {-}}{\partial x _ {i , c} ^ {-}} \to C (\pi^ {+}, \pi^ {-}) (1 - x _ {i, c} ^ {-}),
|
| 412 |
+
$$
|
| 413 |
+
|
| 414 |
+
if $c' \neq c$ , we have
|
| 415 |
+
|
| 416 |
+
$$
|
| 417 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial x _ {i , c ^ {\prime}} ^ {-}} = \frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {-}} \frac {\partial \pi^ {-}}{\partial \pi_ {i} ^ {-}} \frac {\partial \pi_ {i} ^ {-}}{\partial x _ {i , c ^ {\prime}} ^ {-}} \rightarrow - C (\pi^ {+}, \pi^ {-}) x _ {i, c ^ {\prime}} ^ {-}.
|
| 418 |
+
$$
|
| 419 |
+
|
| 420 |
+
We can see that the token-level gradients from the chosen and rejected responses are on the same scale. This suggests that DPO does not by itself cause gradient numerical instability in the generation of a single token. However, once the algorithm's impact on the state-transition probabilities produced by autoregressive generation is taken into account, the 3D-properties still affect its performance.
|
| 421 |
+
|
| 422 |
+
# B.2 ANALYSIS ON REGULARIZATION TECHNIQUES
|
| 423 |
+
|
| 424 |
+
In Section 3.3, we propose two straightforward regularization techniques. Here we provide theoretical analysis to see why they can mitigate 3D-properties.
|
| 425 |
+
|
| 426 |
+
# B.2.1 FLEXIBLE $\beta$ -DPO
|
| 427 |
+
|
| 428 |
+
The first technique employs variable values of $\beta$ to control the rate at which the likelihood of rejected responses declines. Consider using different $\beta^{+}$ and $\beta^{-}$ for the chosen and rejected responses:
|
| 429 |
+
|
| 430 |
+
$$
|
| 431 |
+
\ell^ {\mathrm {f l e x - D P O}} (\theta) = - \sum_ {(x, a ^ {+}, a ^ {-})} \log \sigma \left[ \beta^ {+} \log \frac {\pi_ {\theta} (a ^ {+} | x)}{\pi_ {0} (a ^ {+} | x)} - \beta^ {-} \log \frac {\pi_ {\theta} (a ^ {-} | x)}{\pi_ {0} (a ^ {-} | x)} \right].
|
| 432 |
+
$$
|
| 433 |
+
|
| 434 |
+
The loss function can be rewritten as:
|
| 435 |
+
|
| 436 |
+
$$
|
| 437 |
+
\ell^ {\mathrm {f l e x - D P O}} (\theta) = \sum_ {(x, a ^ {+}, a ^ {-})} \log \left(1 + \left(\frac {\pi_ {0} (a ^ {+} | x)}{\pi_ {\theta} (a ^ {+} | x)}\right) ^ {\beta^ {+}} \left(\frac {\pi_ {\theta} (a ^ {-} | x)}{\pi_ {0} (a ^ {-} | x)}\right) ^ {\beta^ {-}}\right).
|
| 438 |
+
$$
|
| 439 |
+
|
| 440 |
+
For a given $(x, a^{+}, a^{-})$ , let
|
| 441 |
+
|
| 442 |
+
$$
|
| 443 |
+
\alpha := \frac {\pi_ {0} (a ^ {+} | x) ^ {\beta^ {+}}}{\pi_ {0} (a ^ {-} | x) ^ {\beta^ {-}}}, \quad \pi^ {+} := \pi (a ^ {+} | x), \quad \pi^ {-} := \pi (a ^ {-} | x), \quad z := \frac {\pi_ {\theta} (a ^ {-} | x) ^ {\beta^ {-}}}{\pi_ {\theta} (a ^ {+} | x) ^ {\beta^ {+}}}.
|
| 444 |
+
$$
|
| 445 |
+
|
| 446 |
+
Then we have
|
| 447 |
+
|
| 448 |
+
$$
|
| 449 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {+}} = \frac {\partial \log (1 + \alpha z)}{\partial z} \frac {\partial z}{\partial \pi^ {+}} = \frac {\alpha}{1 + \alpha z} \frac {\partial z}{\partial \pi^ {+}},
|
| 450 |
+
$$
|
| 451 |
+
|
| 452 |
+
$$
|
| 453 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {-}} = \frac {\partial \log (1 + \alpha z)}{\partial z} \frac {\partial z}{\partial \pi^ {-}} = \frac {\alpha}{1 + \alpha z} \frac {\partial z}{\partial \pi^ {-}}.
|
| 454 |
+
$$
|
| 455 |
+
|
| 456 |
+
Considering the case when $\pi^{-} \to 0$ , we get $\alpha / (1 + \alpha z) \to \alpha$ , thus
|
| 457 |
+
|
| 458 |
+
$$
|
| 459 |
+
\frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {+}} \rightarrow - \alpha \beta^ {+} (\pi^ {+}) ^ {- \beta^ {+} - 1} (\pi^ {-}) ^ {\beta^ {-}}, \quad \frac {\partial \ell^ {\mathrm {D P O} ^ {\prime}}}{\partial \pi^ {-}} \rightarrow \alpha \beta^ {-} (\pi^ {+}) ^ {- \beta^ {+}} (\pi^ {-}) ^ {\beta^ {-} - 1}.
|
| 460 |
+
$$
|
| 461 |
+
|
| 462 |
+
As shown by the expressions for the gradients of the DPO' objective with respect to the chosen and rejected response likelihoods, the magnitude of the gradients is controlled by the parameters $\beta^{+}$ and $\beta^{-}$ . In particular, increasing $\beta^{+}$ strengthens the gradient for the chosen response, $\pi^{+}$ , while reducing $\beta^{-}$ dampens the gradient for the rejected response, $\pi^{-}$ , as it approaches zero. This gradient behavior suggests that adjusting these parameters can effectively mitigate the 3D-properties discussed in Section 3. Specifically, a large $\beta^{+}$ ensures that the likelihood of the chosen responses remains sufficiently reinforced, while a small $\beta^{-}$ prevents the likelihood of rejected responses from decreasing too rapidly, which would otherwise lead to the instability and degradation described earlier.
|
| 463 |
+
|
| 464 |
+
By fine-tuning $\beta^{+}$ and $\beta^{-}$ , it becomes possible to control the interaction between the gradients of the chosen and rejected responses, reducing the drastic drop in rejected response likelihood, the degradation into response suppression, and the dispersion effect on unseen responses. This strategy thus offers a potential solution for improving the stability and performance of DPO by reducing the severity of the 3D-properties.
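A minimal sketch of this flexible- $\beta$ objective for a single preference pair follows; the default values $\beta^{+} = 0.1$ and $\beta^{-} = 0.08$ follow the setting reported in Table 9, and the function name is illustrative.

```python
import math

def flex_dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
                  beta_plus=0.1, beta_minus=0.08):
    """Flex-DPO: separate temperatures for the chosen and rejected log-ratio terms."""
    margin = (beta_plus * (logp_chosen - ref_logp_chosen)
              - beta_minus * (logp_rejected - ref_logp_rejected))
    return math.log(1.0 + math.exp(-margin))   # -log sigmoid(margin)
```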
|
| 465 |
+
|
| 466 |
+
# B.2.2 SFT LOSS REGULARIZATION
|
| 467 |
+
|
| 468 |
+
The second technique involves augmenting the DPO loss with an SFT loss, a strategy that has been shown to significantly enhance the stability of DPO in previous studies. We can rewrite the loss function:
|
| 469 |
+
|
| 470 |
+
$$
|
| 471 |
+
\ell^{\mathrm{SFT\text{-}DPO}}(\theta) = -\sum_{(x,a^{+},a^{-})} \left\{ \log \sigma\left[\beta \log \frac{\pi_{\theta}(a^{+}|x)}{\pi_{0}(a^{+}|x)} - \beta \log \frac{\pi_{\theta}(a^{-}|x)}{\pi_{0}(a^{-}|x)}\right] + \gamma \log \pi_{\theta}(a^{+}|x) \right\} \tag{6}
|
| 472 |
+
$$
|
| 473 |
+
|
| 474 |
+
Similarly, we have,
|
| 475 |
+
|
| 476 |
+
$$
|
| 477 |
+
\frac {\partial \ell^ {\mathrm {S F T - D P O}}}{\partial \pi^ {+}} = \frac {\partial \log (1 + \alpha z ^ {\beta})}{\partial z} \frac {\partial z}{\partial \pi^ {+}} = \frac {\alpha \beta}{1 + \alpha z ^ {\beta}} z ^ {\beta - 1} \frac {\partial z}{\partial \pi^ {+}} - \gamma \frac {1}{\pi^ {+}},
|
| 478 |
+
$$
|
| 479 |
+
|
| 480 |
+
$$
|
| 481 |
+
\frac {\partial \ell^ {\mathrm {S F T - D P O}}}{\partial \pi^ {-}} = \frac {\partial \log (1 + \alpha z ^ {\beta})}{\partial z} \frac {\partial z}{\partial \pi^ {-}} = \frac {\alpha \beta}{1 + \alpha z ^ {\beta}} z ^ {\beta - 1} \frac {\partial z}{\partial \pi^ {-}}.
|
| 482 |
+
$$
|
| 483 |
+
|
| 484 |
+
As $\pi^{-} \to 0$ , the gradient for the chosen action $\frac{\partial \ell^{\mathrm{SFT-DPO}}}{\partial \pi^{+}} \to -\gamma \frac{1}{\pi^{+}} \neq 0$ , meaning that the likelihood of the chosen responses can continue to be optimized. This behavior is significant, as it indicates that even as the likelihood of rejected responses $\pi^{-}$ approaches zero, the chosen response $\pi^{+}$ can still be improved. This is a key advantage of SFT-DPO over the vanilla DPO, which suffers from gradient vanishing for $\pi^{+}$ when $\pi^{-} \to 0$ . The negative impact of 3D-properties is thus reduced, allowing for more stable and effective optimization in the long run.
|
| 485 |
+
|
| 486 |
+
# B.3 OTHER VARIANTS OF DPO
|
| 487 |
+
|
| 488 |
+
# B.3.1 IDENTITY-PREFERENCE OPTIMIZATION (IPO)
|
| 489 |
+
|
| 490 |
+
In IPO, the loss function can be written as
|
| 491 |
+
|
| 492 |
+
$$
|
| 493 |
+
\ell^ {\mathrm {I P O}} (\theta) = \sum_ {(x, a ^ {+}, a ^ {-})} \left[ \log \left[ \frac {\pi_ {\theta} (a ^ {+} | x) \pi_ {0} (a ^ {-} | x)}{\pi_ {\theta} (a ^ {-} | x) \pi_ {0} (a ^ {+} | x)} \right] - \frac {1}{2 \eta} \right] ^ {2} \tag {7}
|
| 494 |
+
$$
|
| 495 |
+
|
| 496 |
+
We directly give the gradients:
|
| 497 |
+
|
| 498 |
+
$$
|
| 499 |
+
\frac {\partial \ell^ {\mathrm {I P O}}}{\partial \pi_ {\theta} (a ^ {-} | x)} = - 2 \left[ \log \left[ \frac {\pi_ {\theta} (a ^ {+} | x) \pi_ {0} (a ^ {-} | x)}{\pi_ {\theta} (a ^ {-} | x) \pi_ {0} (a ^ {+} | x)} \right] - \frac {1}{2 \eta} \right] \cdot \frac {1}{\pi_ {\theta} (a ^ {-} | x)} \tag {8}
|
| 500 |
+
$$
|
| 501 |
+
|
| 502 |
+
$$
|
| 503 |
+
\frac {\partial \ell^ {\mathrm {I P O}}}{\partial \pi_ {\theta} \left(a ^ {+} | x\right)} = 2 \left[ \log \left[ \frac {\pi_ {\theta} \left(a ^ {+} | x\right) \pi_ {0} \left(a ^ {-} | x\right)}{\pi_ {\theta} \left(a ^ {-} | x\right) \pi_ {0} \left(a ^ {+} | x\right)} \right] - \frac {1}{2 \eta} \right] \cdot \frac {1}{\pi_ {\theta} \left(a ^ {+} | x\right)} \tag {9}
|
| 504 |
+
$$
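A minimal sketch of the IPO objective in Equation (7) for one preference pair follows; $\eta = 0.1$ mirrors the training setting reported in Appendix D.1, and the function name and inputs are illustrative.

```python
def ipo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, eta=0.1):
    """IPO: squared deviation of the log-ratio margin from the target 1/(2*eta)."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return (margin - 1.0 / (2.0 * eta)) ** 2
```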
|
| 505 |
+
|
| 506 |
+
# B.3.2 SEQUENCE LIKELIHOOD CALIBRATION (SLIC)
|
| 507 |
+
|
| 508 |
+
In SLiC, the loss function can be written as
|
| 509 |
+
|
| 510 |
+
$$
|
| 511 |
+
\ell^ {\mathrm {S L i C}} (\theta) = \sum_ {x, a ^ {+}, a ^ {-}} \max \left[ 0, \delta - \log \pi_ {\theta} \left(a ^ {+} | x\right) + \log \pi_ {\theta} \left(a ^ {-} | x\right) \right] - \eta \cdot \log \pi_ {\theta} \left(a ^ {+} | x\right) \tag {10}
|
| 512 |
+
$$
|
| 513 |
+
|
| 514 |
+
if $\delta >\log \frac{\pi_{\theta}(a^{+}|x)}{\pi_{\theta}(a^{-}|x)}$
|
| 515 |
+
|
| 516 |
+
$$
|
| 517 |
+
\frac {\partial \ell^ {\mathrm {S L i C}}}{\partial \pi_ {\theta} \left(a ^ {+} | x\right)} = - \frac {1 + \eta}{\pi_ {\theta} \left(a ^ {+} | x\right)}, \quad \frac {\partial \ell^ {\mathrm {S L i C}}}{\partial \pi_ {\theta} \left(a ^ {-} | x\right)} = \frac {1}{\pi_ {\theta} \left(a ^ {-} | x\right)} \tag {11}
|
| 518 |
+
$$
|
| 519 |
+
|
| 520 |
+
else:
|
| 521 |
+
|
| 522 |
+
$$
|
| 523 |
+
\frac {\partial \ell^ {\mathrm {S L i C}}}{\partial \pi_ {\theta} \left(a ^ {+} | x\right)} = - \frac {\eta}{\pi_ {\theta} \left(a ^ {+} | x\right)}, \quad \frac {\partial \ell^ {\mathrm {S L i C}}}{\partial \pi_ {\theta} \left(a ^ {-} | x\right)} = 0 \tag {12}
|
| 524 |
+
$$
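For completeness, a minimal sketch of the SLiC objective in Equation (10) follows; $\delta = 5$ and $\eta = 0.1$ mirror the training settings reported in Appendix D.1, and the function is illustrative rather than the original implementation.

```python
def slic_loss(logp_chosen, logp_rejected, delta=5.0, eta=0.1):
    """SLiC: hinge on the log-likelihood margin plus a cross-entropy regularizer."""
    rank_term = max(0.0, delta - logp_chosen + logp_rejected)  # margin hinge
    reg_term = -eta * logp_chosen                               # SFT-style regularizer
    return rank_term + reg_term
```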
|
| 525 |
+
|
| 526 |
+
# B.3.3 SIMPLE PREFERENCE OPTIMIZATION (SIMPO)
|
| 527 |
+
|
| 528 |
+
In SimPO, the loss function is written as
|
| 529 |
+
|
| 530 |
+
$$
|
| 531 |
+
\ell^ {\text {S i m P O}} (\theta) = - \sum_ {x, a ^ {+}, a ^ {-}} \left[ \log \sigma \left(\frac {\beta}{| a ^ {+} |} \log \pi_ {\theta} (a ^ {+} | x) - \frac {\beta}{| a ^ {-} |} \log \pi_ {\theta} (a ^ {-} | x) - \gamma\right) \right], \tag {13}
|
| 532 |
+
$$
|
| 533 |
+
|
| 534 |
+
where $|\cdot|$ represents the length of the generated response and $\gamma$ is a target reward margin term. Similarly, we directly give the gradients:
|
| 535 |
+
|
| 536 |
+
$$
|
| 537 |
+
\left. \frac {\partial \ell^ {\operatorname {S i m P O}}}{\partial \pi_ {\theta} \left(a ^ {+} | x\right)} = - \frac {\beta}{| a ^ {+} | \cdot \pi_ {\theta} \left(a ^ {+} | x\right)} \sigma \left(- \left(\frac {\beta}{| a ^ {+} |} \log \pi_ {\theta} \left(a ^ {+} | x\right) - \frac {\beta}{| a ^ {-} |} \log \pi_ {\theta} \left(a ^ {-} | x\right) - \gamma\right)\right), \right. \tag {14}
|
| 538 |
+
$$
|
| 539 |
+
|
| 540 |
+
$$
|
| 541 |
+
\left. \right. \frac {\partial \ell^ {\operatorname {S i m P O}}}{\partial \pi_ {\theta} \left(a ^ {-} | x\right)} = \frac {\beta}{| a ^ {-} | \cdot \pi_ {\theta} \left(a ^ {-} | x\right)} \sigma \left(- \left(\frac {\beta}{| a ^ {+} |} \log \pi_ {\theta} \left(a ^ {+} | x\right) - \frac {\beta}{| a ^ {-} |} \log \pi_ {\theta} \left(a ^ {-} | x\right) - \gamma\right)\right). \tag {15}
|
| 542 |
+
$$
|
| 543 |
+
|
| 544 |
+
The ratio of the two gradients is
|
| 545 |
+
|
| 546 |
+
$$
|
| 547 |
+
\frac {\partial \ell^ {\text {S i m P O}}}{\partial \pi^ {-}} / \frac {\partial \ell^ {\text {S i m P O}}}{\partial \pi^ {+}} = - \frac {| a ^ {+} | \cdot \pi_ {\theta} (a ^ {+} | x)}{| a ^ {-} | \cdot \pi_ {\theta} (a ^ {-} | x)}, \tag {16}
|
| 548 |
+
$$
|
| 549 |
+
|
| 550 |
+
which indicates that as $\pi^{+}$ increases and $\pi^{-}$ decreases, the gradient with respect to $\pi^{-}$ grows faster and leads to a rapid drop in the likelihood of the rejected response. However, different from DPO, as $\pi^{-} \to 0$ , we have $\partial \ell^{\mathrm{SimPO}} / \partial \pi^{+} \to 0$ . As for $\partial \ell^{\mathrm{SimPO}} / \partial \pi^{-}$ , the analysis is non-trivial. We conclude it as a lemma.
|
| 551 |
+
|
| 552 |
+
Lemma 1. As $\pi_{\theta}(a^{-}|x)\to 0$ , the limit of the partial derivative regarding $\pi_{\theta}(a^{-}|x)$ in SimPO is:
|
| 553 |
+
|
| 554 |
+
$$
|
| 555 |
+
\lim_{\pi_{\theta}(a^{-}|x)\rightarrow 0} \frac{\partial \ell^{\mathrm{SimPO}}}{\partial \pi_{\theta}(a^{-}|x)} = \begin{cases} 0, & \text{if } \beta > |a^{-}|, \\ +\infty, & \text{if } \beta < |a^{-}|, \\ \frac{\beta C}{|a^{-}|}, & \text{if } \beta = |a^{-}|, \end{cases} \tag{17}
|
| 556 |
+
$$
|
| 557 |
+
|
| 558 |
+
where $C = e^{-\frac{\beta}{|a^{+}|}\log \pi_{\theta}(a^{+}|x) + \gamma}$ is a constant independent of $\pi_{\theta}(a^{-}|x)$ .
|
| 559 |
+
|
| 560 |
+
Table 3: Data source of the responses in 4 scenarios.
|
| 561 |
+
|
| 562 |
+
<table><tr><td></td><td>chosen responses source</td><td>rejected responses source</td></tr><tr><td>Scenario 1</td><td>Baichuan2-33B</td><td>Baichuan2-33B</td></tr><tr><td>Scenario 2</td><td>Solutions from dataset</td><td>Baichuan2-33B</td></tr><tr><td>Scenario 3</td><td>Baichuan2-33B</td><td>Qwen-7B</td></tr><tr><td>Scenario 4</td><td>Solutions from dataset</td><td>Qwen-7B</td></tr></table>
|
| 563 |
+
|
| 564 |
+
Proof. Let $\pi^{-} = \pi_{\theta}(a^{-}|x)$ and $\pi^{+} = \pi_{\theta}(a^{+}|x)$ . The partial derivative becomes:
|
| 565 |
+
|
| 566 |
+
$$
|
| 567 |
+
f (\pi^ {-}) = \frac {\beta}{| a ^ {-} | \cdot \pi^ {-}} \cdot \sigma (- z),
|
| 568 |
+
$$
|
| 569 |
+
|
| 570 |
+
where $z = \frac{\beta}{|a^{+}|}\log \pi^{+} - \frac{\beta}{|a^{-}|}\log \pi^{-} - \gamma$
|
| 571 |
+
|
| 572 |
+
As $\pi^{-} \to 0$ , $\log \pi^{-} \to -\infty$ , thus $z \to +\infty$ . The sigmoid function approximates to:
|
| 573 |
+
|
| 574 |
+
$$
|
| 575 |
+
\sigma (- z) \approx e ^ {- z} = C \cdot (\pi^ {-}) ^ {\frac {\beta}{| a ^ {-} |}},
|
| 576 |
+
$$
|
| 577 |
+
|
| 578 |
+
where $C = e^{-\frac{\beta}{|a^{+}|}\log \pi^{+} + \gamma}$ . By substituting back, we have:
|
| 579 |
+
|
| 580 |
+
$$
|
| 581 |
+
f (\pi^ {-}) \approx \frac {\beta C}{| a ^ {-} |} \cdot (\pi^ {-}) ^ {\frac {\beta}{| a ^ {-} |} - 1}.
|
| 582 |
+
$$
|
| 583 |
+
|
| 584 |
+
Taking the limit as $\pi^{-}\to 0$, we obtain:
|
| 585 |
+
|
| 586 |
+
$$
|
| 587 |
+
\lim_{\pi^{-}\to 0} f(\pi^{-}) = \begin{cases} 0, & \text{if } \frac{\beta}{|a^{-}|} - 1 > 0 \;(\beta > |a^{-}|), \\ +\infty, & \text{if } \frac{\beta}{|a^{-}|} - 1 < 0 \;(\beta < |a^{-}|), \\ \frac{\beta C}{|a^{-}|}, & \text{if } \frac{\beta}{|a^{-}|} - 1 = 0 \;(\beta = |a^{-}|). \end{cases}
|
| 588 |
+
$$
|
| 589 |
+
|
| 590 |
+

|
| 591 |
+
|
| 592 |
+
Note that in practice $\beta$ is normally chosen to be much smaller than $|a^{-}|$, which means the drastic drop in the rejected response likelihood still happens and the 3D-properties still occur. A minimal sketch of the SimPO loss together with a numerical check of Lemma 1 is given below.
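The following sketch implements the SimPO loss of Equation (13) and numerically checks the divergence predicted by Lemma 1; the response lengths, $\beta$, $\gamma$, and the normalized chosen log-probability are illustrative values, not settings from the paper.

```python
import math

def simpo_loss(logp_chosen, logp_rejected, len_chosen, len_rejected, beta=2.0, gamma=1.0):
    """SimPO: length-normalized log-likelihood margin with a target reward margin gamma."""
    margin = beta * logp_chosen / len_chosen - beta * logp_rejected / len_rejected - gamma
    return math.log(1.0 + math.exp(-margin))  # -log sigmoid(margin)

# Numerical check of Lemma 1: the gradient w.r.t. pi- behaves like (pi-)**(beta/|a-| - 1).
beta, len_rej, logp_chosen_norm, gamma = 2.0, 100, -0.5, 1.0
for pi_minus in (1e-3, 1e-6, 1e-9):
    z = beta * logp_chosen_norm - (beta / len_rej) * math.log(pi_minus) - gamma
    grad = beta / (len_rej * pi_minus) * (1.0 / (1.0 + math.exp(z)))  # sigma(-z)
    print(f"pi-={pi_minus:.0e}  grad={grad:.3e}")
# With beta < |a-| the gradient diverges as pi- -> 0, as stated in Lemma 1.
```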
|
| 593 |
+
|
| 594 |
+
Remark 1. These variants all alleviate the 3D-properties problem of DPO, so the results on mathematical reasoning are partly improved (see Table 9). However, the variants we tested do not perform uniformly well across all tasks. For example, on instruction-following tasks such as poem generation, vanilla DPO outperforms SLiC and IPO. One hypothesis is that their solution forms diverge significantly from the Bradley-Terry model, leading to a loss of generalization in preference learning.
|
| 595 |
+
|
| 596 |
+
# C DATASET DESCRIPTION
|
| 597 |
+
|
| 598 |
+
Table 3 shows the data sources for the LLM experiments in Section 4 across the 4 scenarios. For example, in Scenario 1 both the chosen and the rejected responses are sampled from Baichuan2-33B, which can be regarded as on-policy learning. In Scenario 4, the chosen responses are exactly the solutions given in the datasets, while the rejected responses are sampled from a different LLM, Qwen-7B. In the experiments on Baichuan2-13B, we use the same data rather than re-sampling on-policy chosen responses, for two reasons: (1) the 13B model is not as strong as the 33B model, so we cannot sample enough high-quality responses to serve as chosen ones; and (2) models in the Baichuan2 series are trained on the same data in Pretraining and SFT, so we can approximately assume that their outputs are identically distributed. The log probabilities for the base models in Table 7 confirm this.
|
| 599 |
+
|
| 600 |
+
Table 4 describes the evaluation criteria of the responses to the math questions. A score of 5 means both the process and the result are correct, and a score of 4 or 5 means the answer is correct. We use these two indicators to evaluate the mathematical reasoning ability of the model. GPT-4 is used as the AI evaluator. We provide the evaluation prompt in our code in the supplementary material.
|
| 601 |
+
|
| 602 |
+
Table 4: Evaluation criteria of the responses to the math questions.
|
| 603 |
+
|
| 604 |
+
<table><tr><td>5 points</td><td>Full-score answer: the response is correct with a correct process, considers all possibilities, and is comprehensive.</td></tr><tr><td>4 points</td><td>For complex questions, the answer is correct but lacks the process; for simple questions, the answer is correct but accompanied by a very redundant and verbose reasoning process.</td></tr><tr><td>3 points</td><td>The answer is incorrect, but most of the process is correct; or the answer is correct, but there are obvious errors in the process.</td></tr><tr><td>2 points</td><td>The answer is incorrect, and most of the process is incorrect.</td></tr><tr><td>1 point</td><td>The answer and the entire reasoning process are incorrect, or the answer does not reach a final result.</td></tr></table>
|
| 605 |
+
|
| 606 |
+
Table 5: Poem dataset test set.
|
| 607 |
+
|
| 608 |
+
<table><tr><td>Poem type</td><td>Quatrain</td><td>Song Ci</td><td>Ancient Poetry</td><td>Metical poetry</td><td>Modern poetry</td></tr><tr><td>Test set Count</td><td>138</td><td>518</td><td>93</td><td>173</td><td>85</td></tr></table>
|
| 609 |
+
|
| 610 |
+
Table 5 shows the types of poems in the poem dataset we used and their counts in the test set. The language is Chinese. Chinese poetry has strict format and rhyme requirements depending on the type. For example, a quatrain must have exactly 4 lines, each containing 5 or 7 characters. The second and fourth lines of a quatrain are required to rhyme, i.e., the characters at the end of the second and fourth lines must follow the prescribed tones. Accordingly, we design a rule-based evaluation system to score each dimension of the generated answers (a simplified sketch follows the list below). We selected the following characteristics as the basis for our evaluation:
|
| 611 |
+
|
| 612 |
+
- Row Number: For quatrain, the row number must be 4. For metrical poetry, the row number must be 8.
|
| 613 |
+
- Words per Row: For quatrains and metrical poetry, the number of characters per line must be 5 or 7.
|
| 614 |
+
- Rhythm: Every type of poetry has a certain rhyme pattern requirement. Since it is a bit complicated to describe case by case, we put the requirements in the form of code in the supplementary material.
|
| 615 |
+
- Tone Pattern: For Song Ci, the tone pattern depends on the tune name (cipai).
|
| 616 |
+
- Title: Determined by the requirement in the prompt.
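As an illustration of the rule-based checks listed above, the sketch below scores the two structural criteria (row number and words per row) for a quatrain. The full rhyme and tone rules live in the supplementary code; this function is a simplified stand-in, not the actual evaluation script.

```python
def score_quatrain_structure(poem: str) -> dict:
    """Toy structural check for a Chinese quatrain: 4 lines of 5 or 7 characters each."""
    lines = [line.strip() for line in poem.strip().splitlines() if line.strip()]
    row_ok = len(lines) == 4
    words_ok = all(len(line) in (5, 7) for line in lines) and len({len(l) for l in lines}) == 1
    return {"row_number": row_ok, "words_per_row": words_ok}

print(score_quatrain_structure("白日依山尽\n黄河入海流\n欲穷千里目\n更上一层楼"))
# -> {'row_number': True, 'words_per_row': True}
```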
|
| 617 |
+
|
| 618 |
+
For the Slogan dataset, we evaluate the model's performance based on whether it meets the word count requirements (Word Count) and the quality of the content (Content). We also provide the scoring and evaluation rule-based criteria in our code in the supplementary material.
|
| 619 |
+
|
| 620 |
+
Table 6 shows the amount of data in each dataset. MATH, SuperCLUE, UltraFeedback and HH-RLHF are open-source datasets, while Poem and Slogan are in-house datasets. We provide part of the in-house datasets in the supplementary material to clarify their format and content. SuperCLUE is only used for cross-dataset testing, and COMMON is only used for the comparison between RM training and DPO training in Section 4.4.
|
| 621 |
+
|
| 622 |
+
# D EXPERIMENTS DETAILS
|
| 623 |
+
|
| 624 |
+
# D.1 TRAINING SETTING
|
| 625 |
+
|
| 626 |
+
In this section, we provide a detailed overview of our training settings. Following the implementation of Rafailov et al. (2024), we use the Adam optimizer, with a learning rate of 5e-7 as the default starting point. The most sensitive hyperparameter in the DPO algorithm is $\beta$ (the learning rate also matters, but less so). We keep the default $\beta = 0.1$ from the original DPO paper.
|
| 627 |
+
|
| 628 |
+
Table 6: The statistic of used datasets.
|
| 629 |
+
|
| 630 |
+
<table><tr><td></td><td>MATH*</td><td>SuperCLUE</td><td>Poem</td><td>Slogan</td><td>UltraFeedback</td><td>HH-RLHF</td></tr><tr><td>train set</td><td>5,826</td><td>-</td><td>93,269</td><td>13,592</td><td>170,000</td><td>336,820</td></tr><tr><td>test set</td><td>2,000</td><td>1,072</td><td>1,000</td><td>1,000</td><td>-</td><td>-</td></tr></table>
|
| 631 |
+
|
| 632 |
+
In our experiments the learning rate is set to $5 \times 10^{-6}$, which is the best setting we found in our exploration. We set the batch size to 80 and the number of gradient accumulation steps to 2. The number of training epochs is set to 1. In IPO training, we set $\eta$ to 0.1. In SLiC training, we set $\delta = 5$ and $\eta = 0.1$. All experiments were conducted on a cluster of 40 A100 GPUs.
|
| 633 |
+
|
| 634 |
+
# D.2 SUPPLEMENTARY EXPERIMENTAL RESULTS
|
| 635 |
+
|
| 636 |
+
Table 7 reports the log probabilities before and after training. Scenario 1 exhibits the slowest decline in likelihood compared to other scenarios, effectively mitigating the adverse effects of 3D-properties.
|
| 637 |
+
|
| 638 |
+
Table 8 shows the performance improvements on MATH and SuperCLUE from vanilla DPO training. It is easy to see that Scenario 1, where both the chosen and the rejected responses are on-policy, performs best.
|
| 639 |
+
|
| 640 |
+
Figure 4 and Table 9 present additional results on DPO variants and regularization techniques. The DPO variants achieve on-par or better performance compared with vanilla DPO. Figure 4 shows that as $\beta^{-}$ decreases, the performance on poem generation initially improves, peaks at around $\beta^{-} = 0.08$, and subsequently declines. The initial improvement is intuitive. We conjecture that an excessively low $\beta^{-}$ may cause DPO to behave like SFT on the chosen responses, thereby reducing its generalization capability. The peak value of $\beta^{-}$ can differ across tasks; for example, in mathematical reasoning, setting $\beta^{-} = 0.01$ achieves better performance than vanilla DPO.
|
| 641 |
+
|
| 642 |
+
Figure 6 shows the evolution of the absolute values of the gradients for the chosen and rejected responses ($\partial \ell^{\mathrm{DPO}} / \partial \pi^{+}$ and $\partial \ell^{\mathrm{DPO}} / \partial \pi^{-}$) during DPO training on MATH. It can be seen that $|\partial \ell^{\mathrm{DPO}} / \partial \pi^{-}| \gg |\partial \ell^{\mathrm{DPO}} / \partial \pi^{+}|$, and that $\partial \ell^{\mathrm{DPO}} / \partial \pi^{-}$ grows much faster than $\partial \ell^{\mathrm{DPO}} / \partial \pi^{+}$, which aligns with Property 1.
|
| 643 |
+
|
| 644 |
+
Figure 8 and Figure 9 show the DPO convergence process for models trained on MATH and HH-RLHF, respectively. In the second epoch, the accuracy growth of the model slows down sharply, which indicates that the model overfits the training data. The results further confirm that DPO is an aggressive optimization strategy compared to RLHF-PPO, which makes the impact of the 3D-properties more prominent.
|
| 645 |
+
|
| 646 |
+
Among the four scenarios, Scenario 1, where both chosen and rejected responses are on-policy, ensures a more stable DPO training process and delivers the best testing performance, as shown in Table 1. As shown in Table 7, Scenario 1 exhibits the slowest decline in likelihood compared to other scenarios, effectively mitigating the adverse effects of 3D-properties. This explains the superior performance of on-policy DPO in our tests.
|
| 647 |
+
|
| 648 |
+
In addition to pure on-policy and off-policy DPO, Scenario 2, where the chosen response is off-policy and the rejected response is on-policy, is also common in industry. For example, for some math problems, practitioners treat the ground-truth solution in the dataset as the chosen response and a wrong answer generated by the LLM itself as the rejected response, which we show is harmful. Scenario 3 is a mirror experiment to Scenario 2. These additional experimental results confirm that as long as off-policy data is mixed into the training preference dataset, the performance of DPO is weakened.
|
| 649 |
+
|
| 650 |
+
# D.3 LIMITATIONS
|
| 651 |
+
|
| 652 |
+
Despite the insights provided in this study, several limitations remain. First, while our theoretical analysis and experiments highlight the 3D-properties as a key factor in DPO's suboptimal performance, the complexity of real-world LLMs may involve additional factors that were not fully explored. Second, our experimental evaluation, though spanning diverse tasks such as mathematical
|
| 653 |
+
|
| 654 |
+

|
| 655 |
+
Figure 6: When training with on-policy data, the absolute value of the gradient for rejected responses increases, while the absolute value of the gradient for chosen responses remains almost unchanged.
|
| 656 |
+
|
| 657 |
+

|
| 658 |
+
(a) Accuracy of RM and DPO on HH-rlhf eval set over the training process.
|
| 659 |
+
|
| 660 |
+

|
| 661 |
+
(b) Accuracy of RM and DPO on MATH eval set over the training process.
|
| 662 |
+
|
| 663 |
+

|
| 664 |
+
Figure 7: Comparison of DPO and RM Training. RM training demonstrates greater stability, while DPO training shows significant fluctuations.
|
| 665 |
+
(a) MATH train dataset loss.
|
| 666 |
+
Figure 8: Comparison between DPO and RM training on the training set of MATH. As can be seen, in the second epoch of DPO training, the loss is very small and the accuracy of distinguishing the chosen response from the rejected response is $100\%$ , which indicates that the model overfits the training data.
|
| 667 |
+
|
| 668 |
+

|
| 669 |
+
(b) MATH train dataset accuracy.
|
| 670 |
+
|
| 671 |
+
Table 7: Impact of on-policy training data on the results. $\log \pi (a^{+})$ and $\log \pi (a^{-})$ represent the average log probability per token for the chosen and rejected responses, respectively.
|
| 672 |
+
|
| 673 |
+
<table><tr><td></td><td colspan="2">Baichuan2-13B</td><td colspan="2">Baichuan2-33B</td></tr><tr><td></td><td>log π(a+)</td><td>log π(a-)</td><td>log π(a+)</td><td>log π(a-)</td></tr><tr><td>basemodel</td><td>-0.9181</td><td>-0.9393</td><td>-0.3603</td><td>-0.3634</td></tr><tr><td>DPO in Scenario 1</td><td>-0.9681</td><td>-0.9982</td><td>-0.3629</td><td>-0.3670</td></tr><tr><td>basemodel</td><td>-1.6776</td><td>-0.9238</td><td>-1.4314</td><td>-0.3525</td></tr><tr><td>DPO in Scenario 2</td><td>-1.6265</td><td>-1.2293</td><td>-1.2734</td><td>-0.4254</td></tr><tr><td>basemodel</td><td>-0.9461</td><td>-1.7110</td><td>-0.3460</td><td>-1.1204</td></tr><tr><td>DPO in Scenario 3</td><td>-0.8786</td><td>-1.8848</td><td>-0.3439</td><td>-1.4333</td></tr><tr><td>basemodel</td><td>-1.7468</td><td>-1.7110</td><td>-1.2838</td><td>-1.1451</td></tr><tr><td>DPO in Scenario 4</td><td>-1.6617</td><td>-1.8265</td><td>-1.2273</td><td>-1.2234</td></tr></table>
|
| 674 |
+
|
| 675 |
+
Table 8: Vanilla DPO: Baichuan2-13B and Baichuan2-33B accuracy on ${\mathrm{{MATH}}}^{ * }$ and SuperCLUE.
|
| 676 |
+
|
| 677 |
+
<table><tr><td rowspan="2"></td><td colspan="4">Baichuan2-13B</td><td colspan="4">Baichuan2-33B</td></tr><tr><td>MATH*5 points</td><td>4&5 points</td><td>SuperCLUE5 points</td><td>4&5 points</td><td>MATH*5 points</td><td>4&5 points</td><td>SuperCLUE5 points</td><td>4&5 points</td></tr><tr><td>basemodel</td><td>6.0%</td><td>12.2%</td><td>46.3%</td><td>58.8%</td><td>25.7%</td><td>36.5%</td><td>79.5%</td><td>84.4%</td></tr><tr><td>Scenario 1</td><td>7.9%</td><td>14.4%</td><td>52.8%</td><td>64.6%</td><td>29.9%</td><td>37.5%</td><td>80.2%</td><td>86.6%</td></tr><tr><td>Scenario 2</td><td>4.8%</td><td>9.2%</td><td>47.9%</td><td>61.8%</td><td>29.2%</td><td>36.6%</td><td>72.2%</td><td>79.0%</td></tr><tr><td>Scenario 3</td><td>4.3%</td><td>12.8%</td><td>41.2%</td><td>57.0%</td><td>28.2%</td><td>37.3%</td><td>74.8%</td><td>84.9%</td></tr><tr><td>Scenario 4</td><td>3.2%</td><td>9.3%</td><td>39.5%</td><td>52.9%</td><td>28.6%</td><td>38.1%</td><td>80.2%</td><td>85.8%</td></tr></table>
|
| 678 |
+
|
| 679 |
+
reasoning and instruction following, is limited in scope and may not generalize across all LLM applications. Additionally, the regularization techniques proposed, while effective in our controlled settings, require further validation in larger-scale models and more diverse datasets. Lastly, although we contrasted DPO with RM-based alignment, our study does not exhaustively address other potential reward-free methods, leaving open questions for future exploration.
|
| 680 |
+
|
| 681 |
+

|
| 682 |
+
(a) HH-rlhf train dataset loss.
|
| 683 |
+
|
| 684 |
+

|
| 685 |
+
(b) HH-rlhf train dataset accuracy.
|
| 686 |
+
Figure 9: Comparison between DPO and RM training on the test set of HH-rlhf.
|
| 687 |
+
|
| 688 |
+
Table 9: DPO and its variants/regularized version performance on mathematical reasoning. In Flex-DPO, $\beta^{+} = 0.1$ , $\beta^{-} = 0.08$ .
|
| 689 |
+
|
| 690 |
+
<table><tr><td></td><td colspan="2">MATH*</td><td colspan="2">SuperCLUE</td></tr><tr><td></td><td>5 points</td><td>4&5 points</td><td>5 points</td><td>4&5 points</td></tr><tr><td>basemodel</td><td>25.7%</td><td>36.5%</td><td>79.5%</td><td>84.4%</td></tr><tr><td>DPO</td><td>29.9%</td><td>37.5%</td><td>80.2%</td><td>86.6%</td></tr><tr><td>Flex-DPO</td><td>30.1%</td><td>38.0%</td><td>81.2%</td><td>86.7%</td></tr><tr><td>IPO</td><td>30.0%</td><td>37.7%</td><td>80.5%</td><td>85.9%</td></tr><tr><td>SLiC</td><td>29.3%</td><td>38.7%</td><td>79.7%</td><td>84.5%</td></tr></table>
|
3dpropertiesidentifyingchallengesindpoandchartingapathforward/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:565ab620ad351732d5e4c9aa7a68f08d32e60efb66f6dcd4ef48d6568b686957
size 1071385
3dpropertiesidentifyingchallengesindpoandchartingapathforward/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2f587334c0f2f9ffdd2acd7824bb9e1b5a6d84cd78ebce2b2e771e652616560
size 729066
3dspatialmultimodalmemory/dfa8acf4-63ec-4e98-bb58-719ef5f72492_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:030a181ca05743c3f22dbbaa25dc61d83db629f34ec28272c051e9a3650e36aa
size 85720
3dspatialmultimodalmemory/dfa8acf4-63ec-4e98-bb58-719ef5f72492_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d49976a17ec3b9e96d7eadfbdee3d4cf84bd0daba4f21b66ee64b5210c0394b
size 104107
3dspatialmultimodalmemory/dfa8acf4-63ec-4e98-bb58-719ef5f72492_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b250abc4382d29cee19d98008ba3b7b295150632cb95793021fd8ba8350bb70f
size 18084876
3dspatialmultimodalmemory/full.md
ADDED
@@ -0,0 +1,304 @@
# M3: 3D-Spatial Multimodal Memory
|
| 2 |
+
|
| 3 |
+
Xueyan Zou<sup>1</sup>, Yuchen Song<sup>1</sup>, Ri-Zhao Qiu<sup>1</sup>, Xuanbin Peng<sup>1</sup>, Jianglong Ye<sup>1</sup>, Sifei Liu<sup>2</sup>, Xiaolong Wang<sup>1,2</sup>
|
| 4 |
+
|
| 5 |
+
Core Contribution
|
| 6 |
+
$^{1}$ UC San Diego $^{2}$ NVIDIA
|
| 7 |
+
|
| 8 |
+
https://m3-spatial-memory.github.io
|
| 9 |
+
|
| 10 |
+

|
| 11 |
+
|
| 12 |
+

|
| 13 |
+
CLIP
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
SigLIP
|
| 17 |
+
|
| 18 |
+

|
| 19 |
+
Embedding Space
|
| 20 |
+
Figure 1: Our proposed MultiModal Memory integrates Gaussian splatting with foundation models to efficiently store multimodal memory in a Gaussian structure. The feature maps rendered by our approach exhibit high fidelity, preserving the strong expressive capabilities of the foundation models.
|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
SEEM
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
LLaMA3
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
LLaMAv
|
| 30 |
+
|
| 31 |
+
# ABSTRACT
|
| 32 |
+
|
| 33 |
+
We present 3D Spatial MultiModal Memory (M3), a multimodal memory system designed to retain information about medium-sized static scenes through video sources for visual perception. By integrating 3D Gaussian Splatting techniques with foundation models, M3 builds a multimodal memory capable of rendering feature representations across granularities, encompassing a wide range of knowledge. In our exploration, we identify two key challenges in previous works on feature splatting: (1) computational constraints in storing high-dimensional features for each Gaussian primitive, and (2) misalignment or information loss between distilled features and foundation model features. To address these challenges, we propose M3 with key components of principal scene components and Gaussian memory attention, enabling efficient training and inference. To validate M3, we conduct comprehensive quantitative evaluations of feature similarity and downstream tasks, as well as qualitative visualizations to highlight the pixel trace of Gaussian memory attention. Our approach encompasses a diverse range of foundation models, including vision-language models (VLMs), perception models, and large multimodal and language models (LMMs/LLMs). Furthermore, to demonstrate real-world applicability, we deploy M3's feature field in indoor scenes on a quadruped robot. Notably, we claim that M3 is the first work to address the core compression challenges in 3D feature distillation.
|
| 34 |
+
|
| 35 |
+
# 1 INTRODUCTION
|
| 36 |
+
|
| 37 |
+
Human perception encompasses the world across spatial dimensions. When encountering static elements in the visual environment, individuals tend to organize and store knowledge progressively, starting from a coarse overview and refining it into finer details. For instance, we can recall our daily surroundings with varying levels of detail, ranging from a high-level layout to specific part-level features. However, for larger-scale environments, our understanding tends to remain more coarse and generalized. Previous works, such as NeRF [23] and 3DGS [16], have demonstrated the
|
| 38 |
+
|
| 39 |
+
ability to store scene-level information at the pixel level for intermediate-scale scenes. However, these models lack the capability to retain the semantic understanding of the scene like humans.
|
| 40 |
+
|
| 41 |
+
In this study, we aim to develop a spatial memory system for static scenes, capable of processing static scene video clips spanning spatial horizons. The primary objective is to store all human-processable information in a format that is precise, efficient, and amenable to future interactive queries. Our approach leverages 3D Gaussian splatting techniques and incorporates features extracted from foundation models to construct scenes imbued with semantic knowledge. The selection of Gaussian splatting as our structural format was motivated by two key considerations: First, the need to address video redundancy through efficient compression, and second, the requirement for multi-granular information representation. Gaussian splatting inherently provides a framework for representing the smallest units of information as Gaussian primitives as well as naturally eliminating the spatial redundancy, aligning well with the motivations.
|
| 42 |
+
|
| 43 |
+
Previous feature splatting works such as F-3DGS [51] and F-Splat [26] directly distill 2D feature maps obtained from foundation models into 3D Gaussians via differentiable rendering. We observe two key issues: First, due to the computational limitations, the feature vector dimensions in Gaussian primitives are significantly reduced compared to the original 2D feature maps (typically 16-64 versus 1024), potentially causing an information bottleneck. Second, while the original feature maps may not be inherently 3D-consistent, enforcing 3D consistency in the Gaussians can cause misalignment between the original and distilled features. Consequently, the distilled feature may not accurately capture the knowledge embedded in the foundation model.
|
| 44 |
+
|
| 45 |
+
To address these issues, we present MultiModal Memory (M3), a better integration of Gaussian splatting and multimodal foundation models that efficiently store expressive multimodal memory in a Gaussian structure, facilitating spatial queries. Specifically, we propose to store the original high-dimensional 2D feature maps in a memory bank called principal scene components and use the low-dimensional principal queries from 3D Gaussians as indices. Instead of directly distilling the 2D features into 3D embeddings, we apply Gaussian memory attention between the principal scene components and principal queries to render the foundation model embeddings in a 3D scene.
|
| 46 |
+
|
| 47 |
+
In this way, we combine the best of both foundation models and Gaussian splatting: preserving the high expressive ability of the original foundation model feature maps while maintaining a 3D-consistent, low-dimensional Gaussian structure of the scene. Furthermore, we also design a heuristic algorithm to minimize redundancy in the memory bank by reducing the raw features from the video stream. These reduced features are referred to as Principal Scene Components. Example feature maps rendered by M3 are visualized in Fig. 1.
|
| 48 |
+
|
| 49 |
+
To evaluate M3, we employ a diverse set of foundation models, including vision-language models, LMM/LLMs, and perception models. We adopt both low-level metrics (e.g. PSNR) to assess the model's feature memorization capability and high-level metrics (e.g. mIoU, IR, TR) to assess its performance on downstream perception tasks. Extensive experiments demonstrate that M3 outperforms previous works in both memorization and downstream tasks while maintaining low computational costs. Lastly, we deploy M3 on a quadruped robot platform for grasping, showcasing its potential for real-world generalization from single-scene, multi-scene, and long-horizon tasks.
|
| 50 |
+
|
| 51 |
+
# 2 RELATED WORK
|
| 52 |
+
|
| 53 |
+
Foundation Models. The field of multimodal learning has seen remarkable progress, leading to the development of diverse foundation models. In the vision-language domain, models such as CLIP [28], Florence [40; 45], and the recent SigLIP [47] employ ViT-style [5] transformer architectures to align visual and linguistic representations. For vision-specific tasks, SAM [17; 29] excels in part-level clustering, while DINO [4; 24] advances self-supervised representation learning. In document understanding, LayoutLM [42; 41; 11] combines OCR and text classification for comprehensive document analysis. The language domain has seen significant advancements in reasoning capabilities, exemplified by the LLaMA [36; 37; 6] and Mistral [14; 15] series. While these works have pushed language reasoning to new heights, recent studies like [31; 35; 7] explore mixture-of-experts approaches to enhance visual representation learning in foundation models. These developments, along with advanced models such as ChatGPT [1] and Claude [2], form the backbone of modern Multimodal Large Language Models (MLLMs), paving the way for more sophisticated AI systems.
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
Figure 2: A scene (V) is composed of both structure (S) and knowledge (I). To model these, we leverage multiple foundation models to extract multi-granularity scene knowledge, and employ 3D Gaussian splatting to represent the spatial structure. By combining these techniques, we construct a spatial multimodal memory (M3), which enables downstream applications such as retrieval, captioning and grounding.
|
| 57 |
+
|
| 58 |
+
3D Gaussians and Feature Field. NeRF [23] revolutionized 3D scene representation, but its implicit nature caused slow rendering and training. 3D Gaussian Splatting [16] (3DGS) emerged as a faster, more explicit alternative, enabling rapid training and real-time rendering. Since then, 3DGS has been enhanced: H3DGS [44] improved large-scale rendering, Mip-Splatting tackled anti-aliasing for high detail, and WildGaussians [19] addressed occlusion and appearance changes. Grendel-GS [50] enabled multi-GPU training for efficiency with larger datasets. Researchers began incorporating 3D feature fields into neural rendering pipelines, moving from NeRF-based models like F3RM [32] to 3DGS-based ones. Feature 3DGS [51] added feature representations to 3DGS, leading to advancements like Feature Splatting [26; 13] for language-driven scene synthesis and LEGaussians [33] for open-vocabulary scene understanding. LiveScene [27] introduced interactive radiance fields, while recent work [46] focuses on improving 2D features with 3D-aware fine-tuning for better 2D-3D integration.
|
| 59 |
+
|
| 60 |
+
Scene Graph and Video Memory. Long-horizon scene understanding encompasses both spatial and temporal dimensions. For spatial modeling, scene graphs have been prominent: ConceptFusion [12] introduced open-set multimodal 3D mapping, ConceptGraphs [8] extended this to open-vocabulary 3D scene graphs, and Hierarchical Open-Vocabulary 3D Scene Graphs [39] applied these concepts to language-grounded navigation. Beyond Bare Queries [21] and Open Scene Graphs [22] further demonstrated their utility in object retrieval and navigation. However, these approaches often rely on heuristic edge/node construction and lack direct LMM integration via embeddings. For temporal aspects, previous works have focused on using memory bank embeddings to store information across frames. For instance, MA-LMM [9], MovieChat [34], and Hierarchical Memory [38] introduced various memory augmentation techniques for video understanding. Flash-VStream [48] and Streaming Long Video Understanding [25] concentrated on real-time processing of long video streams. While these temporal methods integrate better with LMMs, they face challenges such as image over-compression (representing an entire frame with a single embedding), frame redundancy (adjacent frames containing overlapping spatial information), and lack of explicit spatial information. Our 3D Gaussian approach bridges this gap, combining spatial precision with temporal flexibility and LMM compatibility.
|
| 61 |
+
|
| 62 |
+
# 3 METHOD
|
| 63 |
+
|
| 64 |
+
# 3.1 3D-SPATIAL MULTIMODAL MEMORY (M3) PIPELINE.
|
| 65 |
+
|
| 66 |
+
A real-world visual perception scene (V) consists of both structure (S) and knowledge (I). The structure of Visual Granularity ( $\mathcal{VG}$ ) can range from the fine details such as leaf shapes, to large-scale elements, such as city layouts. Concurrently, the Knowledge Space ( $\kappa S$ ) spans scales from specific information, such as leaf species (e.g. a red maple leaf) to a comprehensive interpretation (e.g. The space needle in Seattle...) of a view ( $\mathbf{V}_*$ ). Gaussian splatting serves as a framework for constructing scene structure with finest granularity, represented as gaussian primitives, while foundation models provide vast world knowledge spanning various scales for scene knowledge. The organic integration of Gaussian splatting and Foundation Models infuses scene structure with multi-
|
| 67 |
+
|
| 68 |
+

|
| 69 |
+
(b) Gaussian Memory Attention
|
| 70 |
+
Figure 3: Given a video sequence, we utilize foundation models $(\mathbf{F})$ to extract raw features $(\mathbf{R})$ . These features are reduced using Algorithm 1, producing principal scene components (PSC), which are stored in a memory bank. We introduce optimizable attribute queries $(q)$ to Gaussian primitives, and apply a Gaussian Memory Attention $(\mathbf{A}_{gm})$ mechanism to produce the final rendered features $(\hat{\mathbf{R}})$ , which can be linked back to various heads of the foundation models.
|
| 71 |
+
|
| 72 |
+
granularity knowledge, enabling the construction of a full-stack Multimodal Memory of the scene with precise spatial information. To maintain efficiency while preserving the global representation of foundation model features, we compress the extracted features from foundation models into principal scene components (PSC) for each scene and learn to probe the scene via Gaussian Splatting parameters, denoted as principle query $(\mathbf{Q}_p)$ . Ultimately, leveraging the rendering capabilities of Gaussian Splatting, we can dynamically populate the GS structure with multi-granularity information spanning the entire view of the scene. Our pipeline is illustrated in Fig. 2.
|
| 73 |
+
|
| 74 |
+
# 3.2 M3 PRELIMINARIES.
|
| 75 |
+
|
| 76 |
+
Visual Granularity (VG). Visual granularity (VG) typically represents the clustering pixel scope of an image, a concept introduced in Semantic-SAM [20]. Given a view $\mathbf{V}_{*} \in \mathbb{R}^{h \times w,3}$ ( $h, w$ denote the pixel dimensions) in the scene $\mathbf{V}$ , it is composed of multi-granularity segments ranging from individual pixels to the full view (as illustrated in the left part of Fig. 2), represented by $\mathbf{V}_{*} = \{V_{*}^{1}, V_{*}^{2}, \dots, V_{*}^{m}\}$ , where $V_{*}^{i} \in \mathbb{R}^{p,3}$ is the $i^{\text{th}}$ granularity of the view $\mathbf{V}_{*}$ , $p$ is the number of pixels, and $m$ denotes the total number of granularities. This multi-granularity approach is introduced because humans naturally possess multi-granularity recognition of the world for various utilities.
|
| 77 |
+
|
| 78 |
+
Knowledge Space $(\mathcal{K}\mathcal{S})$ . Different foundation models $(\mathbf{F})$ focus on various aspects of knowledge. For instance, CLIP [28] and SigLIP [47] concentrate on image-level perception, while Semantic-SAM [20] emphasizes part-level visual grouping. In contrast, LLaMA3/v [6] incorporates both local and global attention mechanisms. The features generated by these models occupy different knowledge spaces $\mathbf{F}(\mathbf{V}_*) \in \{\mathcal{K}\mathcal{S}^1, \mathcal{K}\mathcal{S}^2, \dots, \mathcal{K}\mathcal{S}^c\}$ where $c$ is the total number of knowledge spaces here, emphasizing diverse aspects such as visual alignment $(\mathcal{K}\mathcal{S}^1)$ , semantics $(\mathcal{K}\mathcal{S}^2)$ , reasoning $(\mathcal{K}\mathcal{S}^3)$ , and etc.
|
| 79 |
+
|
| 80 |
+
Principle Scene Components (PSC) and Principle Query $(\mathbf{Q}_p)$ . We extract foundation model features for each view, denoted as $\mathbf{F}_{*}(\mathbf{V}) = \{\mathbf{E}_{1}^{*},\mathbf{E}_{2}^{*},\dots \mathbf{E}_{n}^{*}\}$ for each foundation model $(\mathbf{F}_{*})$ and scene $(\mathbf{V})$ , where $n$ is the number of views. These foundation model features are represented as $\mathbf{E}_i^*\in \mathbb{R}^{[h\times w,d]}$ , where $h,w$ denote the feature pixel dimensions. However, different views often contain redundant and similar features. We define the key features that construct the scene as Principle Scene Components (PSC), drawing inspiration from the terminology of Principal Component Analysis. The attribute within Gaussian representation responsible for indexing PSC is denoted as Principle Query $(\mathbf{Q}_p)$ , which is learnable parameters in each Gaussian primitive.
|
| 81 |
+
|
| 82 |
+
# 3.3 SPATIAL MULTIMODAL MEMORY
|
| 83 |
+
|
| 84 |
+
Build Scene Structure via 3D Gaussians. We formally define the input of M3 as a video sequence of frames, where each frame corresponds to a view $\mathbf{V}_{*}$ . 3D Gaussian Splatting [16] is employed
|
| 85 |
+
|
| 86 |
+
Algorithm 1 Raw Feature (R) Similarity Reduction Algorithm
|
| 87 |
+
Input: $\mathbf{R}\in \mathbb{R}^{[n\times h\times w,d]}$ (raw features), $\theta \in (0,1]$ (threshold), $c\in \mathbb{N}$ (chunk size)
Output: PSC $\subseteq \mathbf{R}$ (principle scene components)
1: function SimilarityReduction($\mathbf{R}$, $\theta$, $c$)
2:  $n \leftarrow |\mathbf{R}|$  # number of raw features
3:  $\hat{\mathbf{R}} \leftarrow \{\frac{e_i}{\|e_i\|_2} : e_i \in \mathbf{R}\}$  # normalize raw features
4:  $I \leftarrow \emptyset$  # set of selected indices
5:  $U \leftarrow \{0\}^n$  # usage mask, initially all false
6:  for $k \leftarrow 0$ to $\lfloor \frac{n}{c} \rfloor - 1$ do
7:    $C_k \leftarrow \{\hat{e}_i : i \in [kc, (k+1)c) \cap \mathbb{N}\}$  # current chunk
8:    $S_k \leftarrow C_k \cdot \hat{\mathbf{R}}^{T}$  # similarity matrix for the chunk
9:    for $j \leftarrow 0$ to $|C_k| - 1$ do
10:     if $U_{kc+j} = 0$ then
11:       $J \leftarrow \{i : S_{k,j,i} \geq \theta\}$  # similar indices
12:       if $\forall i \in J : U_i = 0$ then
13:         $I \leftarrow I \cup \{kc + j\}$  # select principle component
14:         $\forall i \in J : U_i \leftarrow 1$  # mark similar features as used
15: return PSC $\leftarrow \{\mathbf{R}_i : i \in I\}$
|
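For readers who prefer code, a minimal NumPy sketch of this reduction is given below. The function name, chunk handling, and the final gather step are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def similarity_reduction(R: np.ndarray, theta: float = 0.9, c: int = 4096) -> np.ndarray:
    """Select principle scene components (PSC) from raw features R of shape [N, d]."""
    n = R.shape[0]
    R_hat = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-8)  # normalize rows
    selected = []                      # indices of selected principle components (I)
    used = np.zeros(n, dtype=bool)     # usage mask (U)
    for start in range(0, n, c):       # process features chunk by chunk
        chunk = R_hat[start:start + c]                  # current chunk C_k
        sim = chunk @ R_hat.T                           # similarity matrix S_k: [|C_k|, N]
        for j in range(chunk.shape[0]):
            idx = start + j
            if used[idx]:
                continue
            similar = np.nonzero(sim[j] >= theta)[0]    # J: indices similar to feature idx
            if not used[similar].any():                 # keep only if no similar feature was kept yet
                selected.append(idx)
                used[similar] = True                    # mark all similar features as covered
    return R[np.array(selected)]                        # PSC, a subset of R with shape [t, d]
```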
| 99 |
+
|
| 100 |
+
to fit the scene, with each view rendered by the Gaussian rasterizer. For each Gaussian primitive, the optimizable attributes include the centroid $(x\in \mathbb{R}^3)$ , the rotation quaternion $(r\in \mathbb{R}^4)$ together with the scaling parameters that define its covariance, the opacity value $(\alpha \in \mathbb{R})$ , and the spherical harmonics coefficients $(sh\in \mathbb{R}^3)$ . To model the Principle Scene Components (PSC), we introduce an additional optimizable attribute: principle queries $(q\in \mathbb{R}^l)$ with flexible dimensionality to accommodate various foundation models. Each foundation model utilizes $s$ degrees from $\mathbf{Q}_p\in \mathbb{R}^l$ . These degrees are rendered alongside the Gaussian parameters to produce view-based principle queries $\mathbf{Q}_p^{\mathbf{V}^*}$ with shape $[H,W,l]$ . Following [51], the colors and principle queries are rendered as:
|
| 101 |
+
|
| 102 |
+
$$
|
| 103 |
+
C = \sum_{i \in N} c_i \alpha_i T_i, \quad \mathbf{Q}_p = \sum_{i \in N} q_i \alpha_i T_i, \quad \text{where } T_i = \prod_{j=1}^{i-1} (1 - \alpha_j) \tag{1}
|
| 104 |
+
$$
|
| 105 |
+
|
| 106 |
+
Here, $N$ represents the set of sorted Gaussians overlapping with the given pixel, and $T_{i}$ denotes the transmittance, defined as the accumulated product of $(1-\alpha_j)$ over the previous Gaussians overlapping the same pixel.
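As a simple illustration of Eq. (1), the sketch below composites colors and principle queries for one pixel, assuming the overlapping Gaussians are already depth-sorted and their per-pixel opacities are given; it is a schematic, not the CUDA rasterizer.

```python
import numpy as np

def composite_pixel(colors: np.ndarray, queries: np.ndarray, alphas: np.ndarray):
    """colors: [N, 3], queries: [N, l], alphas: [N] for the depth-sorted Gaussians at one pixel."""
    C = np.zeros(3)
    Q = np.zeros(queries.shape[1])
    T = 1.0                                  # transmittance, product of (1 - alpha_j) so far
    for c_i, q_i, a_i in zip(colors, queries, alphas):
        C += c_i * a_i * T                   # accumulate color
        Q += q_i * a_i * T                   # accumulate principle query
        T *= (1.0 - a_i)                     # update transmittance
    return C, Q                              # rendered color and principle query for the pixel
```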
|
| 107 |
+
|
| 108 |
+
Extract Multi-Granularity Scene Knowledge. Upon preparing the attributes in the Gaussian primitives, we extract multi-granularity scene knowledge via foundation models. Different foundation models focus on different aspects of knowledge projection and granularity, as illustrated in Fig. 4. In this paper, we employ a set of foundation models $\mathbf{F} = \{\mathrm{CLIP}, \mathrm{SigLIP}, \mathrm{DINOv2}, \mathrm{LLaMA3}, \mathrm{LLaMAv}, \mathrm{SEEM}\}$ , where LLaMAv is the vision-instruct version of LLaMA3. For each view, we extract foundation model embeddings, formally expressed as $\mathbf{F}(\mathbf{V}_*) = \mathbf{E} \in \mathbb{R}^{[h \times w, d]}$ .
|
| 109 |
+
|
| 110 |
+
We implement specific algorithms for projecting LLaMA3 language embeddings and SEEM [52] visual prompts into pixel-level features. For LLaMA3, we first use SoM [43] and Semantic-SAM to extract language descriptions for each region. The language prompt of each region is represented as $\mathbf{T} \in \mathbb{R}^{[l_1,d]}$ , where $l_1$ is the number of regions extracted by Semantic-SAM. For SEEM, we utilize visual prompts corresponding to each region, with visual prompts for each image represented as $\mathbf{O} \in \mathbb{R}^{[l_2,d]}$ , where $l_2$ is the number of regions seg-
|
| 111 |
+
|
| 112 |
+
mented by SEEM. We then propagate these features to the pixel level by duplicating the prompts within each mask region, so that $\mathbf{T}$ and $\mathbf{O}$ are indexed to the dimension of $\mathbb{R}^{[h\times w,d]}$ .
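A minimal sketch of this region-to-pixel duplication is shown below; the mask and prompt tensors are assumed inputs (in the pipeline they come from Semantic-SAM/SEEM), and the function name is ours.

```python
import numpy as np

def prompts_to_pixel_features(masks: np.ndarray, prompts: np.ndarray) -> np.ndarray:
    """masks: [l, h, w] boolean region masks; prompts: [l, d], one embedding per region.
    Returns pixel-level features of shape [h, w, d]."""
    l, h, w = masks.shape
    d = prompts.shape[1]
    feat = np.zeros((h, w, d), dtype=prompts.dtype)
    for region_mask, emb in zip(masks, prompts):
        feat[region_mask] = emb              # duplicate the region prompt over its pixels
    return feat                              # can be flattened to [h*w, d]
```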
|
| 113 |
+
|
| 114 |
+
After feature extraction, we obtain raw features $(\mathbf{R} \in \mathbb{R}^{[n,h \times w,d]})$ for the full scene with $n$ views within each foundation model. These raw features span various granularities and knowledge spaces, providing a comprehensive multimodal (vision and language) understanding of the scene. In relation to 3D Gaussian Splatting, the smallest granularity component is the pixel level, with the lowest-level knowledge projection being the RGB color value.
|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
Figure 4: The UMAP visualization of model embedding manifolds reveals distinct shapes, reflecting the models' different focuses.
|
| 118 |
+
|
| 119 |
+

|
| 120 |
+
[Figure 5 panels: Input ("A round wooden table [...]"), DINOv, CLIP/SigLIP, LLaMA3.1, LLaMA3.2; the panel images are not recoverable here.]
|
| 142 |
+
Figure 5: Illustration of patch-level visual embedding extraction and its applications.
|
| 143 |
+
|
| 144 |
+

|
| 145 |
+
|
| 146 |
+

|
| 147 |
+
|
| 148 |
+

|
| 149 |
+
|
| 150 |
+
Compress Scene Knowledge to Memory. While the scene knowledge is extracted from foundation models $\mathbf{F}$ into the raw feature space $\mathbf{R} \in \mathbb{R}^{[n,h \times w,d]}$ , its dimensionality is too high for storage and rendering in each scene. Previous works such as F-3DGS [51] and F-Splat [26] address this issue through feature distillation. However, we observe two major problems with feature distillation: (a) the distilled feature suffers information loss relative to the original feature due to compression (typically 16 to 64 dimensions, mapped by a linear projection back to the original dimension $d$ , which is usually 1,000 or more); (b) the upsampled feature may be misaligned with the original knowledge space of the foundation model, making it difficult to decode with the original $\mathbf{F}$ . To resolve these issues, we first flatten the raw features $(\mathbf{R})$ into $\mathbb{R}^{[n \times h \times w,d]}$ , and then perform similarity reduction along the first dimension using Algorithm 1. The reduced raw features represent the principal scene components (PSC), also referred to as the memory bank, serving as the essential representation of the scene. The memory bank (PSC) has dimensionality $\mathbb{R}^{[t,d]}$ , where $t$ depends on the similarity threshold we set. The reduction is effective because many features are duplicated across neighboring pixels within a view or across repeated regions between views. We visualize the memory bank building process in Fig. 3a.
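As a usage-level sketch (building on the `similarity_reduction` sketch given with Algorithm 1, and with illustrative names), the memory bank can be assembled per foundation model as follows:

```python
import numpy as np

def build_memory_bank(raw_features: np.ndarray, theta: float = 0.9) -> np.ndarray:
    """raw_features: [n_views, h*w, d] features of one foundation model for the whole scene."""
    n, hw, d = raw_features.shape
    flat = raw_features.reshape(n * hw, d)          # flatten views into [n*h*w, d]
    psc = similarity_reduction(flat, theta=theta)   # Algorithm 1 sketch; returns [t, d]
    return psc                                      # t depends on the similarity threshold
```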
|
| 151 |
+
|
| 152 |
+
Gaussian Memory Attention. Given view-based principle queries $\mathbf{Q}_p^{\mathbf{V}^*} \in \mathbb{R}^{[H,W,n]}$ rasterized from the Gaussian primitives, and principle scene components $\mathbf{PSC} \in \mathbb{R}^{[l,d]}$ , we perform Gaussian Memory Attention ( $\mathbf{A}_{gm}$ ) to obtain rendered features aligned with the foundation models. With a learnable, randomly initialized memory projection $\mathbf{W}_m \in \mathbb{R}^{[n,d]}$ , we formally define the Gaussian Memory Attention as follows:
|
| 153 |
+
|
| 154 |
+
$$
|
| 155 |
+
\hat{\mathbf{R}} = \mathbf{A}_{gm}\left(\mathbf{Q}_p^{\mathbf{V}_*}\right) = \operatorname{Softmax}\left(\mathbf{Q}_p^{\mathbf{V}_*} \times \mathbf{W}_m \times \mathbf{PSC}^{T}\right) \times \mathbf{PSC}. \tag{2}
|
| 156 |
+
$$
|
| 157 |
+
|
| 158 |
+
This Gaussian memory attention links the $\mathbf{Q}_p$ with PSC and projects it into the corresponding foundation model knowledge space. The attention process is depicted in Fig. 3 b.
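A PyTorch-style sketch of Eq. (2) is given below; the tensor shapes follow the text loosely and, together with the function name, are assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_memory_attention(Q_p: torch.Tensor, W_m: torch.Tensor, PSC: torch.Tensor) -> torch.Tensor:
    """Q_p: [H, W, l] rendered principle queries; W_m: [l, d] memory projection;
    PSC: [t, d] principle scene components. Returns rendered features of shape [H, W, d]."""
    H, W, l = Q_p.shape
    q = Q_p.reshape(H * W, l) @ W_m           # project queries toward the feature space: [H*W, d]
    attn = F.softmax(q @ PSC.T, dim=-1)       # attention over memory entries: [H*W, t]
    R_hat = attn @ PSC                        # weighted sum of PSC entries: [H*W, d]
    return R_hat.reshape(H, W, -1)
```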
|
| 159 |
+
|
| 160 |
+
Scene Rendering and Deployment. Given the rendered features $\hat{\mathbf{R}}$ for each foundation model, aligned with the corresponding foundation model space, we can link back to the powerful functions of the foundation models. We expect that for models like CLIP, SigLIP, and SEEM, the rendered features can be used directly for vision-language tasks such as retrieval and grounding. For generative models like LLaMA3 and LLaMAv, we anticipate that the features can be used directly for captioning or simple visual question answering. Formally, we express this as $\mathcal{X} = \mathbf{F}_{\mathrm{dec}}(\hat{\mathbf{R}})$ .
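For instance, a hedged sketch of linking rendered CLIP-space features back to a text query (as used later for grounding on the robot) might look as follows; the text embedding is assumed to come from any CLIP-compatible text encoder, and the sigmoid/temperature choice is ours.

```python
import torch
import torch.nn.functional as F

def ground_text_query(rendered_feat: torch.Tensor, text_emb: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """rendered_feat: [H, W, d] features rendered in the CLIP knowledge space;
    text_emb: [d] embedding of the query (e.g., "yellow bath duck").
    Returns a per-pixel relevance map of shape [H, W]."""
    f = F.normalize(rendered_feat, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    sim = (f @ t) / tau                      # temperature-scaled cosine similarity per pixel
    return torch.sigmoid(sim)                # high values highlight the queried object
```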
|
| 161 |
+
|
| 162 |
+
# 4 EXPERIMENTS
|
| 163 |
+
|
| 164 |
+
# 4.1 EXPERIMENTAL SETUP
|
| 165 |
+
|
| 166 |
+
Datasets. To support extensive quantitative and qualitative evaluation, we perform experiments using several existing scene datasets [3; 18; 10] and collect a custom robot dataset (M3-Robot) using a quadruped robot and a drone. Specifically, we use Garden (an outdoor scene) from Mip-NeRF 360 [3], Train from the Tanks & Temples dataset [18], and Playroom as well as DrJohnson from the Deep Blending dataset [10]. For the M3-Robot dataset, we collect images using two mobile robots. The Table-Top sequence is collected with a RealSense D405 camera mounted on the end effector of a Unitree Z1 robot arm on a Unitree B1 quadruped robot, where a human operator teleoperates the robot with a remote controller to obtain centripetal views of tabletop objects. Images in the Geisel sequence are collected by a tele-operated DJI Mini 4 Pro drone. The collected images are processed by COLMAP [30] to obtain camera parameters and initialization.
|
| 167 |
+
|
| 168 |
+
Memory across multiple Foundation Models. The multi-modal memory mechanism allows M3 to retain knowledge from many models, which differs from existing distillation-based methods that
|
| 169 |
+
|
| 170 |
+
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Method</td><td rowspan="2"># Param</td><td colspan="2">DINOv2</td><td colspan="2">CLIP</td><td colspan="2">SigLIP</td><td colspan="2">SEEM</td><td colspan="2">LLaMA3</td><td colspan="2">LLaMAv</td></tr><tr><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td></tr><tr><td rowspan="3">Train</td><td>F-Splat [26]</td><td>61M</td><td>0.6833</td><td>1.9835</td><td>0.5998</td><td>0.4779</td><td>0.6346</td><td>0.7851</td><td>0.4269</td><td>11.72</td><td>0.5300</td><td>0.2900</td><td>0.7026</td><td>56.23</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>0.3790</td><td>1.0108</td><td>0.3330</td><td>0.1540</td><td>0.3692</td><td>0.3328</td><td>0.1063</td><td>0.1034</td><td>0.4993</td><td>0.0150</td><td>0.6288</td><td>46.48</td></tr><tr><td>M3</td><td>35M</td><td>0.5321</td><td>1.681</td><td>0.3140</td><td>0.2800</td><td>0.2811</td><td>0.5096</td><td>0.1389</td><td>0.2251</td><td>0.4401</td><td>0.0253</td><td>0.7069</td><td>53.43</td></tr><tr><td rowspan="3">Garden</td><td>F-Splat [26]</td><td>61M</td><td>0.7328</td><td>1.9567</td><td>0.7005</td><td>1.3570</td><td>0.7247</td><td>0.8698</td><td>0.4224</td><td>9.4675</td><td>0.4944</td><td>0.3314</td><td>0.7443</td><td>60.83</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>0.2295</td><td>0.6033</td><td>0.2105</td><td>0.0945</td><td>0.2697</td><td>0.2585</td><td>0.1071</td><td>0.1424</td><td>0.4139</td><td>0.0141</td><td>0.4913</td><td>43.08</td></tr><tr><td>M3</td><td>35M</td><td>0.5701</td><td>1.7279</td><td>0.3168</td><td>0.2876</td><td>0.2927</td><td>0.0004</td><td>0.1839</td><td>0.3469</td><td>0.3387</td><td>0.0217</td><td>0.7235</td><td>58.04</td></tr><tr><td rowspan="3">Drjohnson</td><td>F-Splat [26]</td><td>61M</td><td>0.8107</td><td>2.0333</td><td>0.6689</td><td>0.7877</td><td>0.6826</td><td>0.7744</td><td>0.4650</td><td>10.411</td><td>0.3757</td><td>0.0145</td><td>0.8184</td><td>54.82</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>0.4190</td><td>1.1279</td><td>0.3344</td><td>0.1537</td><td>0.3846</td><td>0.3552</td><td>0.1693</td><td>0.2169</td><td>0.3853</td><td>0.0150</td><td>0.6669</td><td>47.35</td></tr><tr><td>M3</td><td>35M</td><td>0.5878</td><td>1.7553</td><td>0.3435</td><td>0.2924</td><td>0.2975</td><td>0.5366</td><td>0.2456</td><td>0.4179</td><td>0.3175</td><td>0.0226</td><td>0.7224</td><td>52.68</td></tr><tr><td rowspan="3">Playroom</td><td>F-Splat [26]</td><td>61M</td><td>0.7956</td><td>1.9640</td><td>0.6458</td><td>0.7808</td><td>0.6839</td><td>0.7678</td><td>0.4745</td><td>10.873</td><td>0.3915</td><td>0.0136</td><td>0.8185</td><td>59.42</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>0.4867</td><td>1.2193</td><td>0.3813</td><td>0.1726</td><td>0.4571</td><td>0.4094</td><td>0.1714</td><td>0.2103</td><td>0.3987</td><td>0.0139</td><td>0.6922</td><td>52.50</td></tr><tr><td>M3</td><td>35M</td><td>0.6074</td><td>1.7545</td><td>0.3260</td><td>0.2987</td><td>0.2951</td><td>0.5623</td><td>0.2560</td><td>0.4584</td><td>0.3555</td><td>0.0241</td><td>0.7288</td><td>57.38</td></tr></table>
|
| 171 |
+
|
| 172 |
+
Table 1: Feature distance compared with distillation methods that use similar or higher parameter budgets, across datasets and foundation models.
|
| 173 |
+
|
| 174 |
+
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Method</td><td rowspan="2">#Param</td><td colspan="4">CLIP</td><td colspan="6">SigLIP</td></tr><tr><td>mIoU</td><td>cIoU</td><td>AP50</td><td>AP60</td><td>I2T@1</td><td>I2T@5</td><td>I2T@10</td><td>T2I@1</td><td>T2I@5</td><td>T2I@10</td></tr><tr><td rowspan="3">Train</td><td>Ground Truth</td><td>-</td><td>25.3</td><td>26.3</td><td>14.7</td><td>3.3</td><td>81.5</td><td>97.3</td><td>100.0</td><td>71.0</td><td>89.4</td><td>92.1</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>24.2</td><td>24.3</td><td>16.3</td><td>7.1</td><td>2.6</td><td>13.2</td><td>28.9</td><td>0.0</td><td>2.6</td><td>18.4</td></tr><tr><td>M3</td><td>35M</td><td>25.4</td><td>26.5</td><td>19.6</td><td>12.5</td><td>55.2</td><td>84.2</td><td>92.1</td><td>52.6</td><td>84.2</td><td>92.1</td></tr><tr><td rowspan="3">Playroom</td><td>Ground Truth</td><td>-</td><td>25.6</td><td>24.2</td><td>9.6</td><td>3.0</td><td>96.5</td><td>100.0</td><td>100.0</td><td>62.0</td><td>96.5</td><td>100.0</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>23.8</td><td>21.4</td><td>11.9</td><td>3.0</td><td>79.3</td><td>96.6</td><td>96.6</td><td>31.0</td><td>79.3</td><td>89.7</td></tr><tr><td>M3</td><td>35M</td><td>23.1</td><td>23.1</td><td>11.9</td><td>5.9</td><td>72.4</td><td>96.6</td><td>100.0</td><td>41.3</td><td>65.5</td><td>68.9</td></tr><tr><td rowspan="3">Geisel</td><td>Ground Truth</td><td>-</td><td>19.5</td><td>21.4</td><td>5.3</td><td>0.0</td><td>100.0</td><td>100.0</td><td>100.0</td><td>60.0</td><td>85.7</td><td>91.4</td></tr><tr><td>F-3DGS [51]</td><td>61M</td><td>19.0</td><td>20.4</td><td>14.1</td><td>1.2</td><td>45.7</td><td>94.3</td><td>100.0</td><td>0.0</td><td>20.0</td><td>34.3</td></tr><tr><td>M3</td><td>35M</td><td>21.8</td><td>23.5</td><td>16.5</td><td>11.8</td><td>100.0</td><td>100.0</td><td>100.0</td><td>71.4</td><td>85.7</td><td>94.2</td></tr></table>
|
| 175 |
+
|
| 176 |
+
Table 2: Downstream grounding and retrieval metrics across datasets.
|
| 177 |
+
|
| 178 |
+
only distill a few (2-3) models. Specifically, as described in Sec. 3.3, we employ 6 foundation models to resemble different aspects of human memory. Each model has a different granularity and semantic focus: image-level vision-language understanding via CLIP [28] and SigLIP [47]; pixel-level semantic understanding via SEEM [52]; self-supervised structural features via DINOv2 [24]; and LLaMA3.1/3.2v [6] for multimodal understanding and reasoning.
|
| 179 |
+
|
| 180 |
+
In Fig. 5, we provide a comprehensive illustration of how we extract features from the foundation models. The extracted features (marked in orange) are either aligned with language representations or passed onward as input to the language encoder.
|
| 181 |
+
|
| 182 |
+
Loss Computation. For each input image, we extract patch-level embeddings from the aforementioned models. Previous methods [26; 51] compute a patch-wise distance loss on the rendered features, which not only consumes a large amount of GPU memory (hindering parallel training across all foundation models) but also creates artifacts when downsampling the features. Instead, we use a point-based loss, sampling 2,000 points from both the predicted and ground-truth features for the distance computation. This largely reduces the training overhead, as shown in Table 1.
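A minimal sketch of such a point-based loss is shown below; the 2,000-point sampling follows the text, while the particular mix of cosine and L2 terms and the function name are assumptions.

```python
import torch
import torch.nn.functional as F

def point_sampled_feature_loss(pred: torch.Tensor, gt: torch.Tensor, n_points: int = 2000) -> torch.Tensor:
    """pred, gt: [H, W, d] rendered and reference features for one view."""
    H, W, _ = pred.shape
    idx = torch.randint(0, H * W, (n_points,))                  # random pixel indices
    p = pred.reshape(H * W, -1)[idx]
    g = gt.reshape(H * W, -1)[idx]
    cos_loss = 1.0 - F.cosine_similarity(p, g, dim=-1).mean()   # cosine-distance term
    l2_loss = F.mse_loss(p, g)                                  # L2 term
    return cos_loss + l2_loss
```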
|
| 183 |
+
|
| 184 |
+
Low-level Evaluation Metrics. To systematically evaluate the multimodal memory, we use evaluation metrics ranging from low/pixel-level measures to high-level downstream tasks. In particular, the low-level metrics evaluate pixel-level image quality. For rendered image quality on evaluation views (views not provided in training), we use common metrics (PSNR, SSIM, and LPIPS [49]), following Kerbl et al. [16]. For feature quality, we report the cosine and L2 distances.
|
| 185 |
+
|
| 186 |
+
High-level Evaluation Metrics. High-level evaluation metrics, unlike low-level ones, focus on downstream tasks built on the features. For discriminative models [28; 47; 52], we report commonly used metrics such as mIoU (mean Intersection over Union), cIoU (complete Intersection over Union), and AP (Average Precision). For retrieval, we use IR@1 (image retrieval at rank 1) and TR@1 (text retrieval at rank 1).
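These retrieval metrics reduce to recall at rank k over an image-text similarity matrix; a small sketch (assuming positives sit on the diagonal and negatives come from an external caption pool such as COCO) is given below.

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int = 1) -> float:
    """sim: [n_images, n_texts] similarity matrix where sim[i, i] is the positive pair."""
    ranks = (-sim).argsort(axis=1)                 # texts sorted by decreasing similarity per image
    hits = [i in ranks[i, :k] for i in range(sim.shape[0])]
    return float(np.mean(hits))                    # fraction of queries with the positive in the top k
```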
|
| 187 |
+
|
| 188 |
+
# 4.2 QUANTITATIVE RESULTS
|
| 189 |
+
|
| 190 |
+
Baseline Implementation. For quantitative experiments, we compare M3 with two recent distillation-based feature GS methods [26; 51]. For fair comparison, we train all methods for approximately 30,000 iterations (29,993 iterations for M3 due to last-batch data loader round-offs). The reference training features are identical across methods. For the distillation-based methods, we follow F-Splat [26] to render a latent feature and then decode the latent features to the embedding
|
| 191 |
+
|
| 192 |
+
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Method</td><td rowspan="2">RGB PSNR↑</td><td rowspan="2">Time min.</td><td colspan="2">CLIP</td><td colspan="2">SigLIP</td><td colspan="2">DINOv2</td><td colspan="2">SEEM</td><td colspan="2">LLaMA3</td><td colspan="2">LLaMAv</td></tr><tr><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td></tr><tr><td rowspan="6">Tabletop</td><td>+CLIP</td><td>21.91</td><td>~6</td><td>0.3100</td><td>0.2956</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>+SigLIP</td><td>21.84</td><td>~10</td><td>0.3100</td><td>0.2956</td><td>0.3122</td><td>0.0005</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>+DINOv2</td><td>21.79</td><td>~15</td><td>0.3101</td><td>0.2956</td><td>0.3123</td><td>0.0005</td><td>0.5161</td><td>1.6057</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>+SEEM</td><td>21.93</td><td>~20</td><td>0.3101</td><td>0.2956</td><td>0.3123</td><td>0.0005</td><td>0.5156</td><td>1.6048</td><td>0.0472</td><td>0.1013</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>+LLaMA3</td><td>21.97</td><td>~30</td><td>0.3101</td><td>0.2956</td><td>0.3122</td><td>0.0005</td><td>0.5160</td><td>1.6056</td><td>0.0472</td><td>0.1012</td><td>0.3628</td><td>0.0246</td><td>-</td><td>-</td></tr><tr><td>+LLaMAv (All)</td><td>21.96</td><td>~45</td><td>0.3100</td><td>0.2956</td><td>0.3122</td><td>0.0005</td><td>0.5157</td><td>1.6049</td><td>0.0472</td><td>0.1013</td><td>0.3628</td><td>0.0246</td><td>0.7262</td><td>59.92</td></tr></table>
|
| 193 |
+
|
| 194 |
+
Table 3: Ablation on the number of foundation models in M3.
|
| 195 |
+
|
| 196 |
+
<table><tr><td rowspan="2">Degree</td><td rowspan="2"># Params</td><td rowspan="2">Iteration</td><td colspan="4">CLIP</td><td colspan="4">SigLIP</td><td colspan="2">DINOv2</td><td colspan="2">SEEM</td><td colspan="2">LLaMA3</td></tr><tr><td>Cosine↓</td><td>L2↓</td><td>mIoU</td><td>AP50</td><td>Cosine↓</td><td>L2↓</td><td>mIoU</td><td>AP50</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td><td>Cosine↓</td><td>L2↓</td></tr><tr><td rowspan="2">8x6=48</td><td rowspan="2">14.8M</td><td>30k</td><td>0.3256</td><td>0.2880</td><td>25.4</td><td>19.6</td><td>0.2913</td><td>0.5239</td><td>19.4</td><td>2.1</td><td>0.5755</td><td>1.7664</td><td>0.1672</td><td>0.2749</td><td>0.4504</td><td>0.0264</td></tr><tr><td>7k</td><td>0.3290</td><td>0.2900</td><td>25.3</td><td>14.6</td><td>0.2938</td><td>0.5277</td><td>21.8</td><td>4.8</td><td>0.5845</td><td>1.7835</td><td>0.2058</td><td>0.3463</td><td>0.4517</td><td>0.0265</td></tr><tr><td rowspan="2">16x6=96</td><td rowspan="2">21.5M</td><td>30k</td><td>0.3140</td><td>0.2800</td><td>25.7</td><td>19.0</td><td>0.2866</td><td>0.5172</td><td>24.3</td><td>10.3</td><td>0.5535</td><td>1.7239</td><td>0.1388</td><td>0.2247</td><td>0.4480</td><td>0.0261</td></tr><tr><td>7k</td><td>0.3206</td><td>0.2842</td><td>25.3</td><td>20.6</td><td>0.2903</td><td>0.5227</td><td>23.2</td><td>8.1</td><td>0.5677</td><td>1.7513</td><td>0.1828</td><td>0.3056</td><td>0.4504</td><td>0.0263</td></tr><tr><td rowspan="2">32x6=192</td><td rowspan="2">34.8M</td><td>30k</td><td>0.3043</td><td>0.2735</td><td>26.7</td><td>22.8</td><td>0.2814</td><td>0.5094</td><td>25.7</td><td>11.9</td><td>0.5318</td><td>1.6807</td><td>0.0972</td><td>0.1553</td><td>0.4401</td><td>0.0253</td></tr><tr><td>7k</td><td>0.3132</td><td>0.2792</td><td>26.2</td><td>21.1</td><td>0.2866</td><td>0.5172</td><td>25.5</td><td>11.4</td><td>0.5515</td><td>1.7198</td><td>0.1269</td><td>0.2139</td><td>0.4436</td><td>0.0256</td></tr><tr><td rowspan="2">64x6=384</td><td rowspan="2">61.4M</td><td>30k</td><td>0.2917</td><td>0.2650</td><td>28.4</td><td>23.9</td><td>0.2721</td><td>0.4957</td><td>28.5</td><td>13.5</td><td>0.5099</td><td>1.6358</td><td>0.0855</td><td>0.1321</td><td>0.4278</td><td>0.0241</td></tr><tr><td>7k</td><td>0.3049</td><td>0.2734</td><td>28.1</td><td>23.9</td><td>0.2802</td><td>0.5079</td><td>27.8</td><td>13.5</td><td>0.5350</td><td>1.6870</td><td>0.1012</td><td>0.1676</td><td>0.4348</td><td>0.0248</td></tr></table>
|
| 197 |
+
|
| 198 |
+
Table 4: Ablation on the dimensions and distillation for each foundation model.
|
| 199 |
+
|
| 200 |
+
space of reference features with a multi-head MLP. For all methods, the optimization of both latent features/memory and decoders is trained from scratch for each scene.
|
| 201 |
+
|
| 202 |
+
Low-Level Results. We report the main quantitative results in Tab. 1, together with the average training time and auxiliary low-level metrics. Our method, M3, outperforms F-Splat while requiring significantly less compute than F-3DGS. SEEM and LLaMA3 feature extraction failed on F-Splat, which we attribute mainly to the ground-truth feature extraction procedure, where each segmentation is duplicated to obtain pixel-level features.
|
| 203 |
+
|
| 204 |
+
Downstream Results. The downstream evaluation results for grounding and retrieval are shown in Table 2. The ground-truth grounding data for Train, Playroom, and Geisel is generated by SoM [43] with Semantic-SAM for mask labels and GPT-4o for captions; example data are shown in the Appendix. We evaluate all images in the validation sets of the three datasets. The grounding results clearly show that M3 outperforms F-3DGS with half the parameters, and the gap is non-trivial, especially in the AP50/AP60 columns. In addition to grounding, we also evaluate M3 on image-text retrieval; as for grounding, we use GPT-4o to generate ground-truth data for the three datasets (examples are also shown in the Appendix). Compared to grounding, M3 outperforms F-3DGS by an even larger margin on retrieval. For image-text retrieval, the positive example is the evaluation image, and the negative pairs are drawn from the COCO dataset. We believe the large gap in retrieval stems from Gaussian memory attention, where the rendered features are much better aligned with the original foundation model; once the correct embeddings are matched in the dataset, this benefit is amplified.
|
| 205 |
+
|
| 206 |
+
Ablation Results. Table 3 shows the ablation on the number of foundation models involved in M3. We gradually add foundation models, from simpler single-modal models to more advanced multimodal ones. Training remains efficient, and the results for each foundation model stay largely independent of the others. Our implementation is based on Grendel-GS, where the training procedure is efficiently parallelized. In Table 4, we ablate the computation budget for training M3, balancing memory footprint, training iterations, and performance. The table shows that increasing the number of degrees generally improves performance on all metrics, while 16 degrees per foundation model is already enough to obtain reasonable performance; this is the setting reported in the paper. In addition, increasing the number of training iterations generally improves performance, while 1/4 of the training budget (7k iterations) usually yields reasonable results.
|
| 207 |
+
|
| 208 |
+
# 4.3 QUALITATIVE RESULTS.
|
| 209 |
+
|
| 210 |
+
M3 consistently demonstrates superior performance across diverse datasets as shown in Fig. 6, by effectively preserving fine-grained details and ensuring smooth, coherent feature representations. The method excels at retaining intricate details such as the textures of chairs and the fine features of books, highlighting its ability to capture micro-level information. This clear layering contributes to rich semantic understanding within the scenes.
|
| 211 |
+
|
| 212 |
+
Furthermore, M3 handles overlapping objects exceptionally well, as evident in the Playroom dataset, where complex arrangements are rendered with accurate structural information. The outputs from various foundation models are consistently high-quality, each retaining spatial structures and semantic information at different granularities. This demonstrates M3's capability to capture both
|
| 213 |
+
|
| 214 |
+

|
| 215 |
+
Figure 6: Qualitative results across datasets using M3. The figure showcases the consistent performance of the M3 across various datasets (Garden, Playroom, Drjohnson, Table-top).
|
| 216 |
+
|
| 217 |
+
low-level spatial details and high-level semantic concepts, making it highly effective for tasks that require comprehensive scene understanding.
|
| 218 |
+
|
| 219 |
+
# 4.4 DEMONSTRATION RESULTS.
|
| 220 |
+
|
| 221 |
+
We also deploy M3 on a quadruped robot platform to demonstrate potential real-world applications of our model. In this experiment, we first tele-operate the robot to scan the table, taking a centripetal video with the onboard camera. After memorizing the scene with M3, the robot is able to locate and grasp any object given a text query on the decoded CLIP features. With the robot pose known through LiDAR, we can render any camera pose ${}_{c_0}T_{c_t}$ with:
|
| 222 |
+
|
| 223 |
+
$$
|
| 224 |
+
{}_{c_0}T_{c_t} = {}_{c_0}T_{e_0} \times {}_{e_0}T_{b_0} \times {}_{b_0}T_{l_0} \times {}_{l_0}T_{w} \times {}_{w}T_{l_t} \times {}_{l_t}T_{b_t} \times {}_{b_t}T_{e_t} \times {}_{e_t}T_{c_t} = {}_{l_0}T_{w} \times {}_{w}T_{l_t}, \tag{3}
|
| 225 |
+
$$
|
| 226 |
+
|
| 227 |
+
where $c, e, b, l$ and $w$ refer to the camera, end effector, arm base, LiDAR, and world frames, respectively. Note that, to align with the COLMAP coordinates, the camera pose needs to be modified as ${}_{\text{COLMAP}_0}T_{c_0} \times {}_{c_0}T_{c_t}$ .
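A small sketch of Eq. (3) is given below: chaining 4x4 homogeneous transforms to express the camera pose at time t in the frame of the initial camera. The transform names mirror the text (camera, end effector, base, LiDAR, world); the inputs themselves are assumed to come from calibration and LiDAR odometry.

```python
import numpy as np

def chain(*transforms: np.ndarray) -> np.ndarray:
    """Compose 4x4 homogeneous transforms left to right: chain(A, B, C) = A @ B @ C."""
    out = np.eye(4)
    for T in transforms:
        out = out @ T                         # accumulate the rigid-body transforms in order
    return out

# Hypothetical usage mirroring Eq. (3), with variable names standing in for the calibrated transforms:
# c0_T_ct = chain(c0_T_e0, e0_T_b0, b0_T_l0, l0_T_w, w_T_lt, lt_T_bt, bt_T_et, et_T_ct)
```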
|
| 228 |
+
|
| 229 |
+
We tested with the query "yellow bath duck" on the decoded CLIP feature, and as shown in Fig. 7, the rubber duck is highlighted in red. The robot can then locate the 3D position of the targeted object with depth information from its depth camera and perform a grasping task.
|
| 230 |
+
|
| 231 |
+

|
| 232 |
+
|
| 233 |
+

|
| 234 |
+
|
| 235 |
+

|
| 236 |
+
|
| 237 |
+

|
| 238 |
+
Figure 7: Real robot deployment.
|
| 239 |
+
|
| 240 |
+

|
| 241 |
+
|
| 242 |
+

|
| 243 |
+
|
| 244 |
+
Conclusion. This paper introduces M3, a novel approach combining foundation models with Gaussian Splatting to create a spatial multimodal memory resembling human memory. M3 demonstrates superior downstream task accuracy with reduced training costs and shows practical utility when deployed on a real robot. One interesting future direction is to design a reasoning module that is capable of directly operating on the optimized memory bank, which we leave to future study.
|
| 245 |
+
|
| 246 |
+
Acknowledgement. This work was supported, in part, by NSF CAREER Award IIS-2240014, and NSF CCF-2112665 (TILOS). This research project has benefitted from the Microsoft Accelerate Foundation Models Research (AFMR) grant program.
|
| 247 |
+
|
| 248 |
+
# REFERENCES
|
| 249 |
+
|
| 250 |
+
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
|
| 251 |
+
[2] Anthropic. The claude 3 model family: Opus, sonnet, haiku. Technical report, Anthropic, 2023. URL https://www.anthropic.com. Accessed: 2023-09-19.
|
| 252 |
+
[3] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5470-5479, 2022.
|
| 253 |
+
[4] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9650-9660, 2021.
|
| 254 |
+
[5] Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
|
| 255 |
+
[6] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
|
| 256 |
+
[7] Xiaoran Fan, Tao Ji, Changhao Jiang, Shuo Li, Senjie Jin, Sirui Song, Junke Wang, Boyang Hong, Lu Chen, Guodong Zheng, et al. Mousi: Poly-visual-expert vision-language models. arXiv preprint arXiv:2401.17221, 2024.
|
| 257 |
+
[8] Qiao Gu, Ali Kuwajerwala, Sacha Morin, Krishna Murthy Jatavallabhula, Bipasha Sen, Aditya Agarwal, Corban Rivera, William Paul, Kirsty Ellis, Rama Chellappa, et al. Conceptgraphs: Open-vocabulary 3d scene graphs for perception and planning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 5021-5028. IEEE, 2024.
|
| 258 |
+
[9] Bo He, Hengduo Li, Young Kyun Jang, Menglin Jia, Xuefei Cao, Ashish Shah, Abhinav Shrivastava, and Ser-Nam Lim. Ma-lmm: Memory-augmented large multimodal model for long-term video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13504-13514, 2024.
|
| 259 |
+
[10] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. 2018.
|
| 260 |
+
[11] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. Layoutlmv3: Pre-training for document ai with unified text and image masking. In Proceedings of the 30th ACM International Conference on Multimedia, pp. 4083-4091, 2022.
|
| 261 |
+
[12] Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, et al. Conceptfusion: Open-set multimodal 3d mapping. arXiv preprint arXiv:2302.07241, 2023.
|
| 262 |
+
[13] Mazeyu Ji, Ri-Zhao Qiu, Xueyan Zou, and Xiaolong Wang. Graspsplats: Efficient manipulation with 3d feature splatting. arXiv preprint arXiv:2409.02084, 2024.
|
| 263 |
+
[14] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
|
| 264 |
+
[15] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
|
| 265 |
+
|
| 266 |
+
[16] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023.
|
| 267 |
+
[17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4015-4026, 2023.
|
| 268 |
+
[18] Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics, 2017.
|
| 269 |
+
[19] Jonas Kulhanek, Songyou Peng, Zuzana Kukelova, Marc Pollefeys, and Torsten Sattler. Wildgaussians: 3d gaussian splatting in the wild. arXiv preprint arXiv:2407.08447, 2024.
|
| 270 |
+
[20] Feng Li, Hao Zhang, Peize Sun, Xueyan Zou, Shilong Liu, Jianwei Yang, Chunyuan Li, Lei Zhang, and Jianfeng Gao. Semantic-sam: Segment and recognize anything at any granularity. arXiv preprint arXiv:2307.04767, 2023.
|
| 271 |
+
[21] Sergey Linok, Tatiana Zemskova, Svetlana Ladanova, Roman Titkov, and Dmitry Yudin. Beyond bare queries: Open-vocabulary object retrieval with 3d scene graph. arXiv preprint arXiv:2406.07113, 2024.
|
| 272 |
+
[22] Joel Loo, Zhanxin Wu, and David Hsu. Open scene graphs for open world object-goal navigation. arXiv preprint arXiv:2407.02473, 2024.
|
| 273 |
+
[23] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.
|
| 274 |
+
[24] Maxime Oquab, Timothee Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
|
| 275 |
+
[25] Rui Qian, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Shuangrui Ding, Dahua Lin, and Jiaqi Wang. Streaming long video understanding with large language models. arXiv preprint arXiv:2405.16009, 2024.
|
| 276 |
+
[26] Ri-Zhao Qiu, Ge Yang, Weijia Zeng, and Xiaolong Wang. Feature splatting: Language-driven physics-based scene synthesis and editing. arXiv preprint arXiv:2404.01223, 2024.
|
| 277 |
+
[27] Delin Qu, Qizhi Chen, Pingrui Zhang, Xianqiang Gao, Bin Zhao, Dong Wang, and Xuelong Li. Livescene: Language embedding interactive radiance fields for physical scene rendering and control. arXiv preprint arXiv:2406.16038, 2024.
|
| 278 |
+
[28] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
|
| 279 |
+
[29] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024.
|
| 280 |
+
[30] Johannes L Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2016.
|
| 281 |
+
[31] Jinghuan Shang, Karl Schmeckpeper, Brandon B May, Maria Vittoria Minniti, Tarik Kelestemur, David Watkins, and Laura Herlant. Theia: Distilling diverse vision foundation models for robot learning. arXiv preprint arXiv:2407.20179, 2024.
|
| 282 |
+
[32] William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, and Phillip Isola. Distilled feature fields enable few-shot language-guided manipulation. arXiv preprint arXiv:2308.07931, 2023.
|
| 283 |
+
|
| 284 |
+
[33] Jin-Chuan Shi, Miao Wang, Hao-Bin Duan, and Shao-Hua Guan. Language embedded 3d gaussians for open-vocabulary scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5333-5343, 2024.
|
| 285 |
+
[34] Enxin Song, Wenhao Chai, Guanhong Wang, Yucheng Zhang, Haoyang Zhou, Feiyang Wu, Haozhe Chi, Xun Guo, Tian Ye, Yanting Zhang, et al. Moviechat: From dense token to sparse memory for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18221-18232, 2024.
|
| 286 |
+
[35] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal lms. arXiv preprint arXiv:2406.16860, 2024.
|
| 287 |
+
[36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
|
| 288 |
+
[37] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Jasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
|
| 289 |
+
[38] Yiqin Wang, Haoji Zhang, Yansong Tang, Yong Liu, Jiashi Feng, Jifeng Dai, and Xiaojie Jin. Hierarchical memory for long video qa. arXiv preprint arXiv:2407.00603, 2024.
|
| 290 |
+
[39] Abdelrhman Werby, Chenguang Huang, Martin Büchner, Abhinav Valada, and Wolfram Burgard. Hierarchical open-vocabulary 3d scene graphs for language-grounded robot navigation. In First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA 2024, 2024.
|
| 291 |
+
[40] Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, and Lu Yuan. Florence-2: Advancing a unified representation for a variety of vision tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4818–4829, 2024.
|
| 292 |
+
[41] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. arXiv preprint arXiv:2012.14740, 2020.
|
| 293 |
+
[42] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 1192-1200, 2020.
|
| 294 |
+
[43] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set- of-mark prompting unleashes extraordinary visual grounding in gpt-4v. arXiv preprint arXiv:2310.11441, 2023.
|
| 295 |
+
[44] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19447-19456, 2024.
|
| 296 |
+
[45] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432, 2021.
|
| 297 |
+
[46] Yuanwen Yue, Anurag Das, Francis Engelmann, Siyu Tang, and Jan Eric Lenssen. Improving 2d feature representations by 3d-aware fine-tuning. arXiv preprint arXiv:2407.20229, 2024.
|
| 298 |
+
[47] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11975-11986, 2023.
|
| 299 |
+
|
| 300 |
+
[48] Haoji Zhang, Yiqin Wang, Yansong Tang, Yong Liu, Jiashi Feng, Jifeng Dai, and Xiaojie Jin. Flash-vstream: Memory-based real-time understanding for long video streams. arXiv preprint arXiv:2406.08085, 2024.
|
| 301 |
+
[49] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
|
| 302 |
+
[50] Hexu Zhao, Haoyang Weng, Daohan Lu, Ang Li, Jinyang Li, Aurojit Panda, and Saining Xie. On scaling up 3d gaussian splatting training. arXiv preprint arXiv:2406.18533, 2024.
|
| 303 |
+
[51] Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, and Achuta Kadambi. Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21676-21685, 2024.
|
| 304 |
+
[52] Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, and Yong Jae Lee. Segment everything everywhere all at once. Advances in Neural Information Processing Systems, 36, 2024.
|
3dspatialmultimodalmemory/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:093b64afd9e9bcd44b17442972d3de4d2b3071035ed7dac45cda75aca90723eb
|
| 3 |
+
size 708929
|
3dspatialmultimodalmemory/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3fa02a64bcfcd105d7804e3a12600ce7e813802c0cd333415a5c32d7eb0177ee
|
| 3 |
+
size 416587
|
3dstreetunveilerwithsemanticaware2dgsasimplebaseline/3f23c088-7d80-4cce-b138-008f4d7a0b93_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cbcd078cefd170b9b87256da1920bf43cbdf28c4ca48fb770cf84f97f21baacd
|
| 3 |
+
size 141434
|
3dstreetunveilerwithsemanticaware2dgsasimplebaseline/3f23c088-7d80-4cce-b138-008f4d7a0b93_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6e9ac5fbaf37914785840977e7abeca8f83fa95aefd035a6765e7e3279b506a0
|
| 3 |
+
size 179701
|
3dstreetunveilerwithsemanticaware2dgsasimplebaseline/3f23c088-7d80-4cce-b138-008f4d7a0b93_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fa7eabf0b405101a2226cbc4d5b7ea6e43a4a0867449a1cb85228ff522144a13
|
| 3 |
+
size 18201801
|
3dstreetunveilerwithsemanticaware2dgsasimplebaseline/full.md
ADDED
|
@@ -0,0 +1,562 @@
|
| 1 |
+
# 3D STREETUNVEILER WITH SEMANTIC-AWARE 2DGS - A SIMPLE BASELINE
|
| 2 |
+
|
| 3 |
+
Jingwei Xu $^{1}$ , Yikai Wang $^{2*}$ , Yiqun Zhao $^{3,6}$ , Yanwei Fu $^{5}$ , Shenghua Gao $^{3,4\ddagger}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ ShanghaiTech University $^{2}$ Nanyang Technological University $^{3}$ The University of Hong Kong $^{4}$ HKU Shanghai Intelligent Computing Research Center $^{5}$ Fudan University $^{6}$ Transcengram xujw2023@shanghaitech.edu.cn yikai.wang@ntu.edu.sg yiqun.zhao@connect.hku.hk yanweifu@fudan.edu.cn gaosh@hku.hk
|
| 6 |
+
|
| 7 |
+
# ABSTRACT
|
| 8 |
+
|
| 9 |
+
Unveiling an empty street from crowded observations captured by in-car cameras is crucial for autonomous driving. However, removing all temporarily static objects, such as stopped vehicles and standing pedestrians, presents a significant challenge. Unlike object-centric 3D inpainting, which relies on thorough observation in a small scene, street scenes involve long trajectories that differ from previous 3D inpainting tasks. The camera-centric moving environment of the captured videos further complicates the task due to the limited angles and short duration over which each object is observed. To address these obstacles, we introduce StreetUnveiler to reconstruct an empty street. StreetUnveiler learns a 3D representation of the empty street from crowded observations. Our representation is based on hard-label semantic 2D Gaussian Splatting (2DGS) for its scalability and its ability to identify the Gaussians to be removed. We inpaint the rendered images after removing unwanted Gaussians to provide pseudo-labels and subsequently re-optimize the 2DGS. Given the temporally continuous camera movement, we divide the empty street scene into observed, partially observed, and unobserved regions, which we propose to locate through a rendered alpha map. This decomposition helps us minimize the regions that need to be inpainted. To enhance the temporal consistency of the inpainting, we introduce a novel time-reversal framework that inpaints frames in reverse order and uses later frames as references for earlier frames to fully exploit the long-trajectory observations. Our experiments conducted on the street scene dataset successfully reconstruct a 3D representation of the empty street. The mesh representation of the empty street can be extracted for further applications.
|
| 10 |
+
|
| 11 |
+
# 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Accurate 3D reconstruction of an empty street scene from an in-car camera video is crucial for autonomous driving. It provides reliable digital environments that simulate real-world street scenarios. Although this is an important task, it is seldom studied in previous works because of its challenging nature in the following aspects: (1) lack of ground-truth data for pre-training inpainting models specialized for street scenes; (2) the camera-centric motion captures objects from limited angles and for brief periods; (3) the long trajectory of in-car camera videos leads to objects appearing and disappearing at different time points, complicating object removal.
|
| 14 |
+
|
| 15 |
+
But there is still a blessing we can take from the long-trajectory, forward-moving nature of the capture. As the car moves forward, objects that disappear from a later frame will only be visible in previous video frames. This gives a hint about maintaining the temporal consistency of the same regions.
|
| 16 |
+
|
| 17 |
+
To address the challenge of reconstructing an empty street, we introduce StreetUnveiler, a reconstruction method targeting unveiling the empty representation of long-trajectory street scenes.
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1: We achieve accurate empty street reconstruction from in-car camera videos. With the aid of the proposed hard-label semantic 2D Gaussian Splatting and time-reversal inpainting framework, we remove the unwanted objects with satisfactory appearance and geometry of occluded regions.
|
| 21 |
+
|
| 22 |
+
StreetUnveiler involves several key steps. First, it reconstructs the observed 3D representation and identifies unobserved regions occluded by objects. Then, it uses a time-reversal inpainting framework to consistently inpaint these unobserved regions as pseudo labels. Finally, it re-optimizes the 3D representation based on these pseudo labels. The overall pipeline is illustrated in Fig. 1.
|
| 23 |
+
|
| 24 |
+
StreetUnveiler first reconstructs the original parked-up street with Gaussian Splatting (GS) due to its scalability and editability. However, as is illustrated in Fig. 2, inpainting with the naïve object mask (orange mask) often results in blurring and loss of details in large inpainted regions, which is a common issue in the previous works Mirzaei et al. (2023); Weder et al. (2023); Wang et al. (2023a); Weber et al. (2024); Liu et al. (2024). Generating masks for completely unobservable regions (blue mask) that are invisible from any viewpoint remains a challenge. Recent work Liu et al. (2024) requires user-provided masks, which is impractical for long trajectories. Moreover, the messy appearance of these regions after removing the Gaussians makes it difficult to use methods like SAM Kirillov et al. (2023). To address the difficulty of finding an ideal inpainting mask, we propose to generate the mask through the rendered alpha map and reconstruct the scene using a hard-label semantic 2DGS Huang et al. (2024a) instead of 3DGS Kerbl et al. (2023). 2DGS has a high opacity value for Gaussians, resulting in low alpha values in completely unobservable regions. A semantic distortion loss and a shrinking loss are employed to further reduce the rendered alpha values of the completely unobservable regions. This approach automatically generates masks for unobservable regions without user input, leading to better inpainting results.
|
| 25 |
+
|
| 26 |
+
Furthermore, we propose a time-reversal inpainting framework to enhance the temporal consistency of inpainting results in completely unobservable regions. By inpainting the video frames in reverse order, we use the later frame as a reference to inpaint the earlier frame. When the video is played in reverse, the object in the later frame will transition only from near to far in the camera view as the camera moves away from the object in a reversed time-space. This method uses a high-to-low-resolution guiding approach instead of filling an area larger than the reference region, as in the low-to-high-resolution approach, which results in more consistent inpainting. Finally, the inpainted pixels are used as pseudo labels to guide the re-optimization of 2DGS. This enables our method to learn a scalable 2DGS model that represents an empty street while preserving the appearance integrity of regions visible in other views.
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
Figure 2: (a) The mask of the whole unwanted object; (b) Inpainting with (a) mask; (c) Generate the inpainting mask through a rendered alpha map. The pixel with a low alpha value is selected as an inpainted pixel; (d) Inpainting with the generated (c) mask.
|
| 30 |
+
|
| 31 |
+
Our contribution can be summarized as follows:
|
| 32 |
+
|
| 33 |
+
- We propose representing the street as hard-label semantic 2DGS, optimizing the 3D scene with semantic guidance for scalable representation and improved instance decoupling.
|
| 34 |
+
- We use a rendered alpha map to locate completely unobservable regions and apply a semantic distortion loss and a shrinking loss to create a reasonable inpainting mask for these regions.
|
| 35 |
+
- We introduce a novel time-reversal inpainting framework for long-trajectory scenes, enhancing the temporal consistency of inpainting results for re-optimization. Experiments show that our method can reconstruct an empty street from in-car camera video containing obstructive elements.
|
| 36 |
+
|
| 37 |
+
# 2 RELATED WORK
|
| 38 |
+
|
| 39 |
+
Neural scene representation and reconstruction. The use of neural radiance fields (NeRF) Mildenhall et al. (2020) to represent 3D scenes inspired a lot of follow-up work based on the original approach. Some works Muller et al. (2022); Chen et al. (2022); Sun et al. (2022); Sara Fridovich-Keil and Alex Yu et al. (2022); Yu et al. (2022); Xiao et al. (2024) explore explicit representations such as low-rank matrices, hash grids, or voxel grids to increase the model capacity of the original MLPs. Some works explored multiple separate MLPs Reiser et al. (2021); Kundu et al. (2022); Fu et al. (2022) to represent instances and backgrounds separately. However, these scale-up strategies are complicated to implement at the scale of street scenes. Existing works Xu et al. (2023); Tancik et al. (2022); Turki et al. (2022); Lu et al. (2023); Rematas et al. (2022); Yang et al. (2023b); Wang et al. (2023c); Turki et al. (2023); Meuleman et al. (2023); Wang et al. (2023b); Guo et al. (2023); Zhang et al. (2023b); Siddiqui et al. (2023) explored mesh-based, primitive-based, or grid-based representations for large-scale street scenes. However, both grid-based representation Guo et al. (2023) and mesh-based representation Wang et al. (2023c) may be constrained by their limited topology, making it hard to decouple the scene into separate instances. Recent advances in point-based rendering techniques Kerbl et al. (2023); Lassner & Zollhofer (2021); Xu et al. (2022); Huang et al. (2024a) achieve both high rendering quality and fast rendering speed. The point-based nature of Gaussian Splatting enables scalability for street scenes. While recent works Chen et al. (2023b); Yan et al. (2024); Lin et al. (2024b); Ren et al. (2024); Cheng et al. (2024) have explored the reconstruction of large-scale scenes using Gaussian Splatting, our work focuses on the unveiling stage of a street scene, which is more important for autonomous driving and more challenging.
|
| 40 |
+
|
| 41 |
+
3D scene manipulation and inpainting. Early works Wang et al. (2021); Yuan et al. (2020); Philip & Drettakis (2018); Thonat et al. (2016); Anguelov et al. (2010); Liu et al. (2018); Yu et al. (2018; 2019); Yi et al. (2020); Zhao et al. (2021); Mirzaei et al. (2024) explored street scene editing by leveraging single-view or multi-view image inpainting networks. With the rapid development of Neural Scene Representation, editing a 3D scene has been explored by lots of works Chong Bao and Bangbang Yang et al. (2022); Zhao et al. (2024b); Yang et al. (2023a); Yuan et al. (2022); Bao et al. (2023); Kobayashi et al. (2022); Kerr et al. (2023); Peng et al. (2023); Zhao et al. (2024a). Edit-NeRF Liu et al. (2021) pioneered shape and color editing of neural fields using latent codes. Subsequent works Bao et al. (2023); Kobayashi et al. (2022); Kerr et al. (2023); Peng et al. (2023) utilized CLIP models to provide editing guidance from text prompts or reference images. Recent works Weder et al. (2023); Zhang et al. (2022); Mirzaei et al. (2023); Xiang et al. (2023); Chen et al. (2023a); Fang et al. (2023); Ye et al. (2023); Weber et al. (2024); Wang et al. (2024c); Lin et al. (2024a); Mirzaei et al. (2024); Prabhu et al. (2023) also explored 2D stylization and inpainting techniques, utilizing pretrained Diffusion Priors Rombach et al. (2022) for editing 3D scenes. Specifically, Chen et al. (2023a); Fang et al. (2023); Ye et al. (2023); Wang et al. (2024c) investigated these approaches in collaboration with Gaussian Splatting. Unlike them, our work focuses on street scene object removal and empty street reconstruction, which is more challenging.
Image and video inpainting. Image inpainting Bertalmio et al. (2000) aims to fill in the missing regions of an image. Standard approaches include GAN-based methods Pathak et al. (2016); Zhao et al. (2020), attention-based methods Yu et al. (2018); Liu et al. (2019), transformer-based methods Wan et al. (2021); Liu et al. (2022), and, more recently, diffusion-based methods Rombach et al. (2022); Wang et al. (2024a). ControlNet Zhang et al. (2023a) enables generating images with additional conditions on top of frozen diffusion models. Recently, LeftRefill Cao et al. (2024) learned to guide frozen diffusion inpainting models with the extra condition of a reference image, enabling
multi-view inpainting with a frozen diffusion model. However, these image inpainting methods mainly focus on static scenarios. Video inpainting targets temporally consistent inpainting of continuous image sequences, utilizing approaches such as 3D CNNs Wang et al. (2019); Hu et al. (2020), temporal shifting Zou et al. (2021), flow guidance Kim et al. (2019); Xu et al. (2019); Li et al. (2022), and temporal attention Ren et al. (2022), to name a few. However, these video inpainting methods hardly consider the long-trajectory movement of cameras. In contrast, our paper focuses on the inpainting of large-scale street scenes. Furthermore, the 2DGS representation used in our paper enables free-view rendering of the inpainted video.
# 3 PROBLEM FORMULATION
Given in-car camera videos and the Lidar data of a street filled with parked vehicles, our goal is to remove all temporarily static objects in the street, such as stopped vehicles and standing pedestrians, and to reconstruct an empty street. This task, which we name Street Unveiling, reconstructs scenes devoid of these static obstacles, providing an empty representation of the street environment, typically as a 3D model that supports free-view rendering. It holds significant implications for autonomous driving systems, urban planning, and scene understanding applications.
Street Unveiling shares some similarities with related tasks but cannot be addressed by existing approaches. (1) 3D reconstruction primarily models a central object or scene captured with an object-centric camera. In contrast, Street Unveiling focuses on the background, aiming to remove foreground objects to reveal an empty street; the absence of ground truth further differentiates it from standard 3D reconstruction. (2) Video inpainting typically deals with videos captured by fixed or minimally moving cameras featuring one or a few central objects. Conversely, Street Unveiling involves long camera trajectories without central objects. These distinctions call for different capabilities and novel methods to address the unique challenges of Street Unveiling.
# 4 SEMANTIC STREET RECONSTRUCTION
We opt for 2D Gaussian Splatting (2DGS) Huang et al. (2024a) as our scene representation for its rendering speed and editability. We first introduce 2DGS in Sec. 4.1. Subsequently, we elaborate on our algorithm tailored for street unveiling using 2DGS in Sec. 4.2 and Sec. 4.3.
# 4.1 PRELIMINARY: 2D GAUSSIAN SPLATTING
Our reconstruction stage builds upon 2DGS Huang et al. (2024a), a state-of-the-art point-based renderer with excellent geometry performance. Each 2D Gaussian is defined by several key components: the central point $\mathbf{p}_k$, two principal tangential vectors $\mathbf{t}_u$ and $\mathbf{t}_v$ that determine its orientation, and a scaling vector $\mathbf{S} = (s_u, s_v)$ controlling the variances of the 2D Gaussian distribution.
2D Gaussian Splatting represents the scene's geometry as a set of 2D Gaussians. A 2D Gaussian is defined in a local tangent plane in world space, parameterized as follows:
$$
P(u, v) = \mathbf{p}_k + s_u \mathbf{t}_u u + s_v \mathbf{t}_v v. \tag{1}
$$
For the point $\mathbf{u} = (u,v)$ in $uv$ space, its 2D Gaussian value can then be evaluated using the standard Gaussian function:
$$
\mathcal{G}(\mathbf{u}) = \exp\left(-\frac{u^2 + v^2}{2}\right). \tag{2}
$$
The center $\mathbf{p}_k$, scaling $(s_u, s_v)$, and rotation $(\mathbf{t}_u, \mathbf{t}_v)$ are learnable parameters. Each 2D Gaussian primitive also carries an opacity $\alpha$ and a view-dependent appearance $\mathbf{c}$ represented with spherical harmonics. For volume rendering, Gaussians are sorted by depth and composited into an image with front-to-back alpha blending:
$$
\mathbf{c}(\mathbf{x}) = \sum_{i=1} \mathbf{c}_i \alpha_i \mathcal{G}_i(\mathbf{u}(\mathbf{x})) \prod_{j=1}^{i-1} \left(1 - \alpha_j \mathcal{G}_j(\mathbf{u}(\mathbf{x}))\right), \tag{3}
$$
where $\mathbf{x}$ represents a homogeneous ray emitted from the camera and passing through $uv$ space.
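As a concrete illustration of Eq. (3), the following is a minimal sketch (not the authors' code) of front-to-back alpha blending for a single pixel, assuming the splats hitting the pixel have already been depth-sorted and their Gaussian weights $\mathcal{G}_i(\mathbf{u}(\mathbf{x}))$ evaluated:

```python
import torch

def composite_pixel(colors, alphas, gauss_vals):
    """Front-to-back alpha blending of depth-sorted splats for one pixel (Eq. 3).

    colors:     (N, 3) view-dependent colors c_i (nearest splat first)
    alphas:     (N,)   per-splat opacities alpha_i
    gauss_vals: (N,)   Gaussian weights G_i(u(x)) at this pixel
    """
    w = alphas * gauss_vals                                           # effective opacity per splat
    t = torch.cumprod(torch.cat([torch.ones(1), 1 - w[:-1]]), dim=0)  # transmittance prod_{j<i}(1 - w_j)
    weights = w * t                                                   # final blending weights
    return (weights.unsqueeze(-1) * colors).sum(dim=0)

# toy example with three splats along one ray
rgb = composite_pixel(torch.rand(3, 3),
                      torch.tensor([0.9, 0.5, 0.8]),
                      torch.tensor([1.0, 0.7, 0.4]))
```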
# 4.2 2DGS FOR STREET SCENE RECONSTRUCTION
2DGS Huang et al. (2024a) is notable for its accurate geometry reconstruction of object surfaces. However, applying 2DGS to reconstruct content without a surface, such as the sky in an open-air street scene, remains unexplored. We aim to reconstruct the street scene as a radiance field and a semantic field using 2DGS. More details about the radiance field reconstruction are included in the supplementary.
Learning 2D Gaussians with semantic guidance. We aim to augment the radiance field of street scenes with editability. Inspired by Guo et al. (2022); Yan et al. (2024); Chen et al. (2023b); Zhou et al. (2024), we harness the power of 2D semantic segmentation and distill this knowledge back into the 2D Gaussians. To do so, we attach a 'hard' semantic label to each 2D Gaussian. 'Hard' means that the semantic label is non-trainable, which differs from the learnable 'soft' labels used in recent works Zhou et al. (2024); Yan et al. (2024); Zhou et al. (2023b). Although the 'hard' semantic label itself is not trainable, the model can still render correct 2D semantic maps by adjusting each Gaussian's opacity, rotation, scaling, and position. This encourages points with the same semantic label to gather closer together, facilitating accurate object removal in 3D space. With each 2D Gaussian associated with a one-hot encoded semantic label $\mathbf{s}$, we render the 2D semantic map as:
$$
\hat{S}(\mathbf{x}) = \sum_{i=1} \mathbf{s}_i \alpha_i \mathcal{G}_i(\mathbf{u}(\mathbf{x})) \prod_{j=1}^{i-1} \left(1 - \alpha_j \mathcal{G}_j(\mathbf{u}(\mathbf{x}))\right). \tag{4}
$$
During our densification, the newly generated splats will inherit the original hard semantic labels.
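For illustration, the same blending weights can composite the fixed one-hot labels into a per-pixel semantic distribution as in Eq. (4). The snippet below is a hypothetical sketch (the 19-class label set is an assumption), not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def composite_semantics(onehot_labels, alphas, gauss_vals):
    """Render a per-pixel semantic distribution from non-trainable one-hot labels (Eq. 4).

    onehot_labels: (N, K) fixed one-hot label s_i per depth-sorted splat
    alphas, gauss_vals: (N,) opacity and Gaussian weight of each splat at this pixel
    """
    w = alphas * gauss_vals
    t = torch.cumprod(torch.cat([torch.ones(1), 1 - w[:-1]]), dim=0)
    return ((w * t).unsqueeze(-1) * onehot_labels).sum(dim=0)   # (K,) soft class scores

# three splats with hard labels 2, 2 and 10 out of an assumed 19-class label set
labels = F.one_hot(torch.tensor([2, 2, 10]), num_classes=19).float()
sem = composite_semantics(labels, torch.tensor([0.8, 0.6, 0.9]), torch.tensor([1.0, 0.5, 0.3]))
```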
# 4.3 OPTIMIZATION OF 2DGS FOR STREET UNVEILING
In this part, we first introduce the standard objectives used by previous approaches to optimize 2DGS Huang et al. (2024a). We then discuss the shortcomings of these objectives in street scenes and propose new objectives tailored for Street Unveiling. In summary, our objectives consist of a photometric loss, a semantic loss, a normal consistency loss, two different depth distortion losses, and a shrinking loss.
Standard approach: As in 3DGS Kerbl et al. (2023), we use $\mathcal{L}_1$ loss and D-SSIM loss for supervising RGB color, with $\lambda = 0.2$ :
$$
\mathcal{L}_{\mathrm{rgb}} = (1 - \lambda)\mathcal{L}_1 + \lambda\,\mathcal{L}_{\text{D-SSIM}}. \tag{5}
$$
Following 2DGS Huang et al. (2024a), a depth distortion loss and a normal consistency loss are adopted to refine the geometry of the 2DGS representation of the street scene.
$$
\mathcal{L}_{\mathrm{d}} = \sum_{i, j} \omega_i \omega_j |z_i - z_j| \qquad \mathcal{L}_{\mathrm{n}} = \sum_i \omega_i \left(1 - \mathbf{n}_i^{\top}\mathbf{N}\right) \tag{6}
$$
Here, $\omega_{i}$ represents the blending weight of the $i$ -th intersection. $z_{i}$ denotes the depth of the intersection points. $\mathbf{n}_i$ is the normal of the splat facing the camera. $\mathbf{N}$ is the estimated normal at nearby depth point $\mathbf{p}$ .
We employ Cross-Entropy (CE) loss to supervise semantic labels:
$$
\mathcal{L}_{\mathrm{s}}(\mathbf{x}) = \operatorname{CE}\left(\hat{S}(\mathbf{x}), S(\mathbf{x})\right) \tag{7}
$$
where $S$ is a pseudo semantic map extracted from a pre-trained segmentation model Xie et al. (2021).
Shortcomings of standard objectives. In Street Unveiling, the scene semantics should be kept clean and consistent so that the Gaussians belonging to objects to be removed can be reliably identified. However, the naive depth distortion loss alone does not prevent 2D Gaussians Huang et al. (2024a) with different semantic labels from merging, which leads to noisy semantic information about the 3D world. Meanwhile, noisy Gaussians in unseen regions persist unless we explicitly eliminate them. Both problems harm the generation of an ideal inpainting mask.
Clean-up objectives. To reduce the noise in the semantic field, we propose a semantic depth distortion loss $\mathcal{L}_{\mathrm{ds}}$ and a shrinking loss $\mathcal{L}_{\alpha}$ on the opacity $\alpha$:
$$
\mathcal{L}_{\mathrm{ds}} = \sum_k \mathcal{L}_{\mathrm{d}}^{k} \qquad \mathcal{L}_{\alpha} = \frac{1}{N}\sum_p \alpha_p \tag{8}
$$
where $k$ iterates over the semantic labels and $\mathcal{L}_{\mathrm{d}}^{k}$ denotes the distortion loss of the 2DGS Huang et al. (2024a) sharing the same semantic label. This semantic depth distortion loss is applied to the rendered result of the Gaussians with the same semantic label; intuitively, it encourages the 2DGS with the same label to have more consistent depth at the pixel level. The shrinking loss further eliminates Gaussians that are not actually seen from any viewpoint. $\alpha_p$ is the opacity $\alpha$ of each Gaussian, and $N$ is the total number of Gaussians.
The total loss is given as
$$
\mathcal{L} = \mathcal{L}_{\mathrm{rgb}} + \lambda_{d}\mathcal{L}_{\mathrm{d}} + \lambda_{n}\mathcal{L}_{\mathrm{n}} + \lambda_{ds}\mathcal{L}_{\mathrm{ds}} + \lambda_{s}\mathcal{L}_{\mathrm{s}} + \lambda_{\alpha}\mathcal{L}_{\alpha} \tag{9}
$$
We empirically set $\lambda_{d} = 100$ , $\lambda_{n} = 0.05$ , $\lambda_{ds} = 100$ , $\lambda_{s} = 0.1$ , and $\lambda_{\alpha} = 0.001$ .
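A minimal sketch of how these terms could be assembled per iteration with the weights above; the inputs (rendered image, D-SSIM value, per-class distortion terms, semantic logits, per-splat opacities) are assumed to come from the renderer, and this is not the authors' training code:

```python
import torch
import torch.nn.functional as F

def total_loss(render, gt, d_ssim, l_dist, l_normal, l_dist_per_class,
               sem_logits, sem_gt, opacities, lam=0.2):
    """Assemble Eq. (5)-(9) with the empirically chosen weights."""
    l_rgb = (1 - lam) * F.l1_loss(render, gt) + lam * d_ssim       # Eq. (5)
    l_sem = F.cross_entropy(sem_logits, sem_gt)                    # Eq. (7)
    l_ds = torch.stack(l_dist_per_class).sum()                     # Eq. (8), per-label distortion
    l_alpha = opacities.mean()                                     # Eq. (8), shrinking loss
    return (l_rgb + 100.0 * l_dist + 0.05 * l_normal
            + 100.0 * l_ds + 0.1 * l_sem + 0.001 * l_alpha)        # Eq. (9)
```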
# 5 EMPTY STREET RECONSTRUCTION
A common strategy Mirzaei et al. (2023); Weder et al. (2023) for inpainting small scenes is to apply 2D inpainting methods in image space to the removed objects and use the results for re-optimization. However, several problems arise in street scenes. (1) Some views yield over-blurry inpainting results because of the huge inpainting masks, as illustrated in Fig. 2(b); (2) Some occluded regions of the street struggle to stay consistent because they are exposed to a large number of views along the long trajectory. These challenges make street scenes especially vulnerable to inconsistent inpainting.
In a point-based scene representation, eliminating an object amounts to deleting Gaussians. However, naive removal often yields unsatisfactory results, particularly in the completely unobservable regions beneath the object. In this section, we first describe how to generate the ideal inpainting mask, as in Fig. 2(c). We then present our time-reversal inpainting framework and explain how the inpainting results are used to re-optimize the 2DGS.
# 5.1 GENERATION OF IDEAL INPAINTING MASK
In a street video captured by a moving car, the pixel space can be divided into three categories: (1) observable regions, which are not occluded by any objects; (2) partially observable regions, which are occluded in some views but visible in others; and (3) completely unobservable regions, which are occluded in all recorded views. For the second case, we can exploit information from other views to preserve more of the street's appearance. As illustrated in Fig. 2, naively inpainting with the object mask causes unexpectedly blurry results in the partially observable regions, which are occluded in the current viewpoint but visible from other viewpoints.
To distinguish partially observable regions from completely unobservable regions and improve the inpainting quality, we propose using the rendered alpha map to generate the mask for completely unobservable regions. For a given viewpoint, we first remove the Gaussians of unwanted objects. For robustness, we remove other Gaussians that are "too close" to the previously removed Gaussians. Then we render the alpha map of the remaining scene. We identify the completely unobservable region via pixels with low alpha values. The pixels with alpha values lower than a threshold are selected as inpainting masks. The threshold is set as 0.99 in our implementation.
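A minimal sketch of this mask generation step, assuming the alpha map has already been rendered from the remaining Gaussians (hypothetical helper, not the released code):

```python
import numpy as np

def unobservable_mask(alpha_map: np.ndarray, threshold: float = 0.99) -> np.ndarray:
    """Pixels whose accumulated alpha stays below the threshold after removal are
    treated as completely unobservable and handed to the inpainter."""
    return alpha_map < threshold

# toy alpha map: ~1 where background splats still cover the pixel, low underneath the removed object
alpha = np.array([[1.00, 0.995, 0.30],
                  [1.00, 0.70,  0.05]])
mask = unobservable_mask(alpha)   # True marks the inpainting region
```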
# 5.2 TIME-REVERSAL INPAINTING
The core challenge in reconstructing the empty street scene is ensuring consistency between different views over the long trajectory. However, current video inpainting methods cannot generalize to our long trajectories and complex scenarios, as validated by Tab. 1, Fig. 5, and the supplementary video comparison; video inpainting models also tend to lag behind the scale-up speed of image inpainting models. To this end, we propose using a reference-based image inpainting method that is trained to ensure consistency between the inpainted region and the reference image. In particular, we adopt LeftRefill Cao et al. (2024) for its stable-diffusion-based backbone and matching-based training strategy. The stable diffusion backbone provides a powerful inpainting model with strong generation capacity in open-world scenarios, which fits the requirements of Street Unveiling. Furthermore, the matching-based training strategy ensures that the inpainting model correctly fills
the masked region based on the observation in the reference image, which encourages consistency between different views.

Figure 3: Illustration of reference-based inpainting of two views. Left: When we inpaint the near view with the far view as a reference, the consistency of the inpainting result degenerates. There are fewer matching pixels between the reference far-view image and near-view inpainting result; Right: Inpainting the far view using the near view as a reference results in better quality and more accurate pixel matching. It's easier to generate the low-resolution content with the high-resolution image as a reference.
However, a time-forward inpainting sequence usually fails to produce consistent inpainting. Given the forward motion of data-collecting vehicles, objects to be removed transition from far to near in the camera view. (1) As illustrated in Fig. 3, when we use the far-view image as a reference to inpaint the same region in the near-view image, the model may not correctly capture the matching relationships, causing inconsistent inpainting. Conversely, setting the near-view image as the reference leads to more precise matching and naturally better inpainting results. (2) The near-view image captures more fine-grained information and a larger receptive field, so it is easier to inpaint from high to low resolution rather than from low to high, which would require extra super-resolution capacity from the inpainting model. Besides, the objects removed in the final frame are consistently observed in the earlier frames.
Based on the above analysis, we propose the time-reversal inpainting framework. Reversing time turns the forward-moving capture into a backward-moving one: when time is reversed, objects to be removed transition from near to far in the camera view, because the camera moves away from the removed objects in the reversed sequence.
We aim to unconditionally inpaint each 3D region only once and then propagate the inpainted pixels to other views via reference-based inpainting. As illustrated in Fig. 4, we first unconditionally inpaint both frame $T_{n}$ and frame $T_{n + 1}$ with Cao et al. (2023). However, some regions of frame $T_{n}$ are also visible in $T_{n + 1}$; by exploiting the implicit pixel-matching ability of the reference-based inpainting model Cao et al. (2024), we expect these regions to share more matching pixels. We therefore use frame $T_{n + 1}$ as a reference to inpaint frame $T_{n}$, masking only the regions visible in $T_{n + 1}$. More implementation details are elaborated in Sec. A.2 of the supplementary.
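The overall ordering can be summarized by the following sketch; `inpaint_uncond` and `inpaint_with_ref` are hypothetical callables standing in for the unconditional Cao et al. (2023) and reference-based Cao et al. (2024) inpainters, and the exact masking strategy is one possible reading of the procedure above:

```python
from typing import Callable, List
import numpy as np

def time_reversal_inpaint(
    frames: List[np.ndarray],            # frames in capture (time-forward) order
    masks: List[np.ndarray],             # per-frame inpainting masks (bool)
    visible_in_next: List[np.ndarray],   # part of masks[t] that is visible in frame t+1
    inpaint_uncond: Callable[[np.ndarray, np.ndarray], np.ndarray],
    inpaint_with_ref: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray],
) -> List[np.ndarray]:
    out: List[np.ndarray] = [None] * len(frames)
    out[-1] = inpaint_uncond(frames[-1], masks[-1])       # the last frame has no later reference
    for t in range(len(frames) - 2, -1, -1):              # walk backwards through time
        seed = inpaint_uncond(frames[t], masks[t] & ~visible_in_next[t])
        out[t] = inpaint_with_ref(seed, masks[t] & visible_in_next[t], out[t + 1])
    return out
```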

Figure 4: Illustration of time-reversal inpainting. After we remove the Gaussians of the objects, we first unconditionally inpaint both frame $T_{n}$ and $T_{n + 1}$ with Cao et al. (2023). Then we transmit the pixels from frame $T_{n + 1}$ to frame $T_{n}$ in the form of reference-based inpainting Cao et al. (2024). From a high-level understanding, we inpaint the earlier frame $T_{n}$ with the later frame $T_{n + 1}$ as a reference condition.
# 5.3 RE-OPTIMIZATION OF THE 2D GAUSSIANS
Once we finish time-reversal inpainting, we use the inpainting results as pseudo labels to guide the re-optimization of the 2DGS Huang et al. (2024a). We use the following loss for re-optimization:
$$
\mathcal{L}_{\text{retrain}} = \mathcal{L}_1 + \lambda_{d}\mathcal{L}_{\mathrm{d}} + \lambda_{n}\mathcal{L}_{\mathrm{n}}. \tag{10}
$$
# 6 EXPERIMENTS
Our experiments were conducted on a single NVIDIA A40 GPU with peak memory usage of 16GB.
Dataset. To evaluate our approach from both the reconstruction and the object removal aspects, we adopt real-world street scenes from the Waymo Open Perception Dataset Sun et al. (2020) and Pandaset Xiao et al. (2021). The Waymo dataset collects data from 5 camera perspectives, covering roughly 230 degrees of field of view (FOV); we downscale the resolution to $484 \times 320$. Pandaset collects data from 6 camera perspectives, covering 360 degrees of FOV; we downscale the resolution to $480 \times 270$. We select front-view video sequences, following the same experimental setup as Yan et al. (2024); Chen et al. (2023b); Zhou et al. (2024), and use 24 scenes from Waymo and 9 scenes from Pandaset for our experiments.
Metrics. To evaluate the effectiveness of object removal, we approach it from a multi-view inpainting perspective. Following well-established previous works Mirzaei et al. (2023); Weder et al. (2023); Liu et al. (2024); Lin et al. (2024a), we calculate the LPIPS Zhang et al. (2018) and Fréchet Inception Distance (FID) Heusel et al. (2017) scores to quantify the discrepancies between the ground-truth views and the removal results. Each output video frame is paired with the corresponding frame from the original training video to compute LPIPS, and the image collections of the output video and the original training video are used to compute FID.
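A sketch of how such an evaluation could be wired up with the public `lpips` package and `torchmetrics`; the exact preprocessing here is an assumption rather than the authors' protocol:

```python
import torch
import lpips                                          # pip install lpips
from torchmetrics.image.fid import FrechetInceptionDistance

lpips_fn = lpips.LPIPS(net='alex')
fid = FrechetInceptionDistance(feature=2048)

def evaluate(pred: torch.Tensor, gt: torch.Tensor):
    """pred / gt: (T, 3, H, W) float frames in [0, 1], paired by timestamp."""
    lpips_score = lpips_fn(pred * 2 - 1, gt * 2 - 1).mean()   # LPIPS expects inputs in [-1, 1]
    fid.update((gt * 255).to(torch.uint8), real=True)         # FID compares the two image sets
    fid.update((pred * 255).to(torch.uint8), real=False)
    return lpips_score.item(), fid.compute().item()
```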
Baselines. We compare our approach to the 3D inpainting method SPIn-NeRF Mirzaei et al. (2023) and a recent Gaussian-Splatting-based inpainting method, Infusion Liu et al. (2024). As the original MLP implementation of SPIn-NeRF Mirzaei et al. (2023) works poorly in large-scale street scenes, we re-implement SPIn-NeRF based on 2DGS Huang et al. (2024a), which clarifies that our superiority comes not only from 2DGS but also from the proposed time-reversal inpainting. Infusion Liu et al. (2024) is evaluated with its official implementation. Since Infusion is designed for small scenes, it conducts GS removal and projection only once for the whole scene; this original setting does not match our long-trajectory task, so we instead run it every 10 frames.

Figure 5: Qualitative comparison results of our method. Our method achieves clearer results than temporally consistent inpainting baselines. Video comparisons are provided in the supplementary.
<table><tr><td></td><td colspan="2">Waymo</td><td colspan="2">Pandaset</td></tr><tr><td></td><td>LPIPS↓</td><td>FID ↓</td><td>LPIPS↓</td><td>FID ↓</td></tr><tr><td colspan="5">Single Image Inpainting</td></tr><tr><td>LaMa(2D) Suvorov et al. (2021)</td><td>0.228</td><td>138.089</td><td>0.276</td><td>160.895</td></tr><tr><td>SDXL Podell et al. (2023)</td><td>0.231</td><td>116.634</td><td>0.276</td><td>133.042</td></tr><tr><td colspan="5">Video Inpainting</td></tr><tr><td>ProPainter Zhou et al. (2023a)</td><td>0.233</td><td>141.906</td><td>0.286</td><td>178.135</td></tr><tr><td colspan="5">3D Inpainting</td></tr><tr><td>SPIn-NeRF Mirzaei et al. (2023)</td><td>0.221</td><td>140.831</td><td>0.266</td><td>174.223</td></tr><tr><td colspan="5">(in 2DGS)</td></tr><tr><td>Infusion Liu et al. (2024)</td><td>0.307</td><td>176.882</td><td>0.325</td><td>176.882</td></tr><tr><td>Ours</td><td>0.216</td><td>127.581</td><td>0.261</td><td>155.527</td></tr></table>
Table 1: Comparison with state-of-the-art 2D/3D inpainting methods on both datasets. Only SDXL achieves a lower FID than ours, yet SDXL does not maintain consistency between different video frames.
<table><tr><td></td><td colspan="2">Waymo</td><td colspan="2">Pandaset</td></tr><tr><td></td><td>LPIPS↓</td><td>FID ↓</td><td>LPIPS↓</td><td>FID ↓</td></tr><tr><td colspan="5">Ablation of different pseudo labels</td></tr><tr><td>w/LaMa Suvorov et al. (2021)</td><td>0.226</td><td>137.753</td><td>0.281</td><td>169.836</td></tr><tr><td>w/SDXL Podell et al. (2023)</td><td>0.222</td><td>139.716</td><td>0.279</td><td>169.805</td></tr><tr><td>w/ProPainter Zhou et al. (2023a)</td><td>0.224</td><td>138.944</td><td>0.281</td><td>169.848</td></tr><tr><td colspan="5">Ablation of 3D representation</td></tr><tr><td>w/3DGS Kerbl et al. (2023)</td><td>0.219</td><td>140.749</td><td>0.280</td><td>161.203</td></tr><tr><td>Time-Forward Inpainting</td><td>0.220</td><td>136.858</td><td>0.270</td><td>158.166</td></tr><tr><td>Ours</td><td>0.216</td><td>127.581</td><td>0.261</td><td>155.527</td></tr></table>
Table 2: Quantitative ablation study on both datasets over different 2D inpainting methods used for 3D inpainting, and over different 3D representations. The comparison verifies the effectiveness of the time-reversal inpainting pipeline and the necessity of the 2DGS representation.
# 6.1 COMPARISON
The quantitative comparison results are shown in Tab. 1, and the qualitative comparison of 3D inpainting methods is shown in Fig. 5. Note that SPIn-NeRF Mirzaei et al. (2023) utilizes LaMa Suvorov et al. (2021) and Infusion Liu et al. (2024) utilizes SDXL Podell et al. (2023) for inpainting. We observe that the 3D inpainting baselines lead to worse results, especially in challenging cases. The results demonstrate that our proposed method achieves better 3D inpainting results in terms of appearance; the geometry of the removed region is discussed in the supplementary, and video comparisons are also included in the supplementary. In Tab. 1, our proposed method outperforms all baselines in LPIPS. Only SDXL achieves a lower FID, yet SDXL does not maintain consistency between different video frames, which is easily observed in the supplementary videos.
# 6.2 FURTHER ANALYSIS
Ablation of different inpainting methods as pseudo labels. We compare the reconstruction results with pseudo labels from different inpainting methods. From Fig. 6, we observe that time reversal maintains the consistency between View 1 and View 2. Current single-image inpainting models, like LaMa Suvorov et al. (2021) and SDXL Podell et al. (2023), fail to maintain consistency over the video frames. Although video inpainting models Zhou et al. (2023a) can be temporally consistent across nearby frames, the whole inpainted region becomes blurred since they cannot guarantee 3D consistency. The quantitative results in Tab. 2 verify the effectiveness of our time-reversal pipeline.

Figure 6: Ablation for different inpainting methods as pseudo labels. From the top two rows, we can observe that time-reversal inpainting is able to achieve more consistent inpainting results than other methods. The bottom row shows that our method can achieve better 3D inpainting results.

Figure 7: Ablation for 3D representation. When we use 3DGS, generating an ideal inpainting mask with a rendered alpha map is hard. Good inpainting results are hard to achieve.
Ablation of 3D representation. We ablate the 3D representation by comparing the results obtained with 3DGS Kerbl et al. (2023) and 2DGS Huang et al. (2024a). From Fig. 7, we observe that after removing the Gaussians, the rendered alpha map with 3DGS fails to yield an ideal inpainting mask. The quantitative results in Tab. 2 verify the necessity of the 2DGS representation.
Ablation of time-reversal inpainting. In our time-reversal inpainting, we inpaint frame $T_{n}$ with $T_{n + 1}$ as reference. We additionally ablate the time order of the inpainting process: for time-forward inpainting, frame $T_{n + 1}$ is inpainted with frame $T_{n}$ as reference. Tab. 2 quantitatively demonstrates the necessity of the time-reversal order. We provide a more comprehensive discussion and qualitative illustration in Sec. B.1 of the supplementary.
# 7 CONCLUSION
We propose StreetUnveiler, a pipeline for reconstructing empty streets from in-car camera videos. Our method represents the street scene using hard-label semantic-aware 2D Gaussian Splatting Huang et al. (2024a), allowing us to remove each instance from the scene seamlessly. To create an ideal inpainting mask, we utilize the rendered alpha map after removing the unwanted 2DGS. Additionally, we introduce a novel time-reversal inpainting framework that enhances consistency across different viewpoints, facilitating the reconstruction of empty streets. Extensive experiments demonstrate that our method effectively reconstructs empty street scenes and supports free-viewpoint rendering.
# 8 ACKNOWLEDGEMENT
The work was supported by NSFC #62172279, #61932020, and the Program of Shanghai Academic Research Leader. We greatly appreciate Shuo Wang, Binbin Huang, and Zibo Zhao for their valuable proofreading.
# REFERENCES
Dragomir Anguelov, Carole Dulong, Daniel Filip, Christian Frueh, Stephane Lafon, Richard Lyon, Abhijit Ogale, Luc Vincent, and Josh Weaver. Google street view: Capturing the world at street level. Computer, 43, 2010.
Chong Bao, Yinda Zhang, Bangbang Yang, et al. Sine: Semantic-driven image-based nerf editing with prior-guided editing field. In CVPR, pp. 20919-20929, 2023.
Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 417-424, 2000.
Chenjie Cao, Qiaole Dong, and Yanwei Fu. Zits++: Image inpainting by improving the incremental transformer on structural priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
Chenjie Cao, Yunuo Cai, Qiaole Dong, Yikai Wang, and Yanwei Fu. Leftrefill: Filling right canvas based on left reference through generalized text-to-image diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision (ECCV), 2022.
Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. Gaussianeditor: Swift and controllable 3d editing with gaussian splatting, 2023a.
Yurui Chen, Chun Gu, Junzhe Jiang, Xiatian Zhu, and Li Zhang. Periodic vibration gaussian: Dynamic urban scene reconstruction and real-time rendering. arXiv:2311.18561, 2023b.
Kai Cheng, Xiaoxiao Long, Kaizhi Yang, Yao Yao, Wei Yin, Yuexin Ma, Wenping Wang, and Xuejin Chen. Gaussianpro: 3d gaussian splatting with progressive propagation. arXiv preprint arXiv:2402.14650, 2024.
Chong Bao and Bangbang Yang, Zeng Junyi, Bao Hujun, Zhang Yinda, Cui Zhaopeng, and Zhang Guofeng. Neumesh: Learning disentangled neural mesh-based implicit field for geometry and texture editing. In European Conference on Computer Vision (ECCV), 2022.
Zhiwen Fan, Wenyan Cong, Kairun Wen, Kevin Wang, Jian Zhang, Xinghao Ding, Danfei Xu, Boris Ivanovic, Marco Pavone, Georgios Pavlakos, Zhangyang Wang, and Yue Wang. Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds, 2024.
Jiemin Fang, Junjie Wang, Xiaopeng Zhang, Lingxi Xie, and Qi Tian. Gaussianeditor: Editing 3d gaussians delicately with text instructions, 2023.
Xiao Fu, Shangzhan Zhang, Tianrun Chen, Yichong Lu, Lanyun Zhu, Xiaowei Zhou, Andreas Geiger, and Yiyi Liao. Panoptic nerf: 3d-to-2d label transfer for panoptic urban scene segmentation. In Proceedings of the International Conference on 3D Vision (3DV), 2022.
Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, and Xiaowei Zhou. Neural 3d scene reconstruction with the manhattan-world assumption. In CVPR, 2022.
Jianfei Guo, Nianchen Deng, Xinyang Li, Yeqi Bai, Botian Shi, Chiyu Wang, Chenjing Ding, Dongliang Wang, and Yikang Li. Streetsurf: Extending multi-view implicit surface reconstruction to street views. arXiv preprint arXiv:2306.04988, 2023.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6629-6640, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.
Yuan-Ting Hu, Heng Wang, Nicolas Ballas, Kristen Grauman, and Alexander G Schwing. Proposal-based video completion. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVII 16, pp. 38-54. Springer, 2020.
Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In SIGGRAPH 2024 Conference Papers. Association for Computing Machinery, 2024a. doi: 10.1145/3641519.3657428.
Nan Huang, Xiaobao Wei, Wenzhao Zheng, Pengju An, Ming Lu, Wei Zhan, Masayoshi Tomizuka, Kurt Keutzer, and Shanghang Zhang. S3gaussian: Self-supervised street gaussians for autonomous driving. arXiv preprint arXiv:2405.20323, 2024b.
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), July 2023. URL https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/.
Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. In International Conference on Computer Vision (ICCV), 2023.
Dahun Kim, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Deep video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5792-5801, 2019.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2022.
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4015-4026, October 2023.
Sosuke Kobayashi, Eiichi Matsumoto, and Vincent Sitzmann. Decomposing nerf for editing via feature field distillation. In Advances in Neural Information Processing Systems, volume 35, 2022. URL https://arxiv.org/pdf/2205.15585.pdf.
Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas Guibas, Andrea Tagliasacchi, Frank Dellaert, and Thomas Funkhouser. Panoptic neural fields: A semantic object-aware neural scene representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Christoph Lassner and Michael Zollhofer. Pulsar: Efficient sphere-based neural rendering. In CVPR, 2021.
Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng. Towards an end-to-end framework for flow-guided video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17562-17571, 2022.
Chieh Hubert Lin, Changil Kim, Jia-Bin Huang, Qinbo Li, Chih-Yao Ma, Johannes Kopf, Ming-Hsuan Yang, and Hung-Yu Tseng. Taming latent diffusion model for neural radiance field inpainting. In European Conference on Computer Vision (ECCV), 2024a.
Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, and Wenming Yang. Vastgaussian: Vast 3d gaussians for large scene reconstruction. In CVPR, 2024b.
Guilin Liu, Kevin J Shih, Ting-Chun Wang, Fitsum A Reda, Karan Sapra, Zhiding Yu, Andrew Tao, and Bryan Catanzaro. Partial convolution based padding. Arxiv, 2018.
Hongyu Liu, Bin Jiang, Yi Xiao, and Chao Yang. Coherent semantic attention for image inpainting. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4170-4179, 2019.
Qiankun Liu, Zhentao Tan, Dongdong Chen, Qi Chu, Xiyang Dai, Yinpeng Chen, Mengchen Liu, Lu Yuan, and Nenghai Yu. Reduce information loss in transformers for pluralistic image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11347-11357, 2022.
Steven Liu, Xiuming Zhang, Zhoutong Zhang, Richard Zhang, Jun-Yan Zhu, and Bryan Russell. Editing conditional radiance fields, 2021.
Zhiheng Liu, Hao Ouyang, Qiuyu Wang, Ka Leong Cheng, Jie Xiao, Kai Zhu, Nan Xue, Yu Liu, Yujun Shen, and Yang Cao. Infusion: Inpainting 3d gaussians via learning depth completion from diffusion prior. arXiv preprint arXiv:2404.11613, 2024.
Fan Lu, Yan Xu, Guang Chen, Hongsheng Li, Kwan-Yee Lin, and Changjun Jiang. Urban radiance field representation with deformable neural mesh primitives. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
Andreas Meuleman, Yu-Lun Liu, Chen Gao, Jia-Bin Huang, Changil Kim, Min H Kim, and Johannes Kopf. Progressively optimized local radiance fields for robust view synthesis. In Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, pp. 16539-16548, 2023.
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision (ECCV), 2020.
Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G. Derpanis, Jonathan Kelly, Marcus A. Brubaker, Igor Gilitschenski, and Alex Levinshtein. SPIn-NeRF: Multiview segmentation and perceptual inpainting with neural radiance fields. In CVPR, 2023.
Ashkan Mirzaei, Riccardo De Lutio, Seung Wook Kim, David Acuna, Jonathan Kelly, Sanja Fidler, Igor Gilitschenski, and Zan Gojcic. Refusion: Reference adapted diffusion models for 3d scene inpainting, 2024. URL https://arxiv.org/abs/2404.10765.
Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):102:1-102:15, July 2022. doi: 10.1145/3528223.3530127. URL https://doi.org/10.1145/3528223.3530127.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2536-2544, 2016.
Songyou Peng, Kyle Genova, Chiyu "Max" Jiang, Andrea Tagliasacchi, Marc Pollefeys, and Thomas Funkhouser. Openscene: 3d scene understanding with open vocabularies. 2023.
Julien Philip and George Drettakis. Plane-based multi-view inpainting for image-based rendering in large scenes. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2018.
Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis, 2023.
Kira Prabhu, Jane Wu, Lynn Tsai, Peter Hedman, Dan B Goldman, Ben Poole, and Michael Broxton. Inpaint3d: 3d scene content generation using 2d inpainting diffusion, 2023. URL https://arxiv.org/abs/2312.03869.
Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlp's. In International Conference on Computer Vision (ICCV), 2021.
Konstantinos Rematas, Andrew Liu, Pratul P. Srinivasan, Jonathan T. Barron, Andrea Tagliasacchi, Tom Funkhouser, and Vittorio Ferrari. Urban radiance fields. CVPR, 2022.
Jingjing Ren, Qingqing Zheng, Yuanyuan Zhao, Xuemiao Xu, and Chen Li. Dlformer: Discrete latent transformer for video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3511-3520, 2022.
Kerui Ren, Lihan Jiang, Tao Lu, Mulin Yu, Linning Xu, Zhangkai Ni, and Bo Dai. Octree-gs: Towards consistent real-time rendering with lod-structured 3d gaussians, 2024.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Sara Fridovich-Keil and Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In CVPR, 2022.
Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision (ECCV), 2016.
Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Norman Müller, Matthias Nießner, Angela Dai, and Peter Kontschieder. Panoptic lifting for 3d scene understanding with neural fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9043-9052, June 2023.
Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. CVPR, 2022.
Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Yu Zhang, Jonathon Shlens, Zhifeng Chen, and Dragomir Anguelov. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor S. Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 3172-3182, 2021. URL https://api.semanticscholar.org/CorpusID:237513361.
Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul Srinivasan, Jonathan T. Barron, and Henrik Kretzschmar. Block-NeRF: Scalable large scene neural view synthesis. arXiv, 2022.
Theo Thonat, Eli Shechtman, Sylvain Paris, and George Drettakis. Multi-view inpainting for image-based scene editing and rendering. In Proceedings of the International Conference on 3D Vision (3DV), 2016.
Haithem Turki, Deva Ramanan, and Mahadev Satyanarayanan. Mega-nerf: Scalable construction of large-scale nerfs for virtual fly-throughs. In CVPR, pp. 12922–12931, June 2022.
Haithem Turki, Jason Y Zhang, Francesco Ferroni, and Deva Ramanan. Suds: Scalable urban dynamic scenes. In Computer Vision and Pattern Recognition (CVPR), 2023.
Ziyu Wan, Jingbo Zhang, Dongdong Chen, and Jing Liao. High-fidelity pluralistic image completion with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4692-4701, 2021.
Chuan Wang, Haibin Huang, Xiaoguang Han, and Jue Wang. Video inpainting by jointly learning temporal structure and spatial details. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 5232-5239, 2019.
Dongqing Wang, Tong Zhang, Alaa Abboud, and Sabine Susstrunk. Inpaintnerf360: Text-guided 3d inpainting on unbounded neural radiance fields. arXiv, 2023a.
Peng Wang, Yuan Liu, Zhaoxi Chen, Lingjie Liu, Ziwei Liu, Taku Komura, Christian Theobalt, and Wenping Wang. F2-nerf: Fast neural radiance field training with free camera trajectories. CVPR, 2023b.
Yifan Wang, Andrew Liu, Richard Tucker, Jiajun Wu, Brian L. Curless, Steven M. Seitz, and Noah Snavely. Repopulating street scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Yikai Wang, Chenjie Cao, Ke Fan, Qiaole Dong, Yifan Li, Xiangyang Xue, and Yanwei Fu. Repositioning the subject within image. Transactions on Machine Learning Research, 2024a. ISSN 2835-8856. URL https://openreview.net/forum?id=orHH4fCtR8.
Yikai Wang, Chenjie Cao, Ke Fan, Xiangyang Xue, and Yanwei Fu. Towards context-stable and visual-consistent image inpainting, 2024b.
Yuxin Wang, Qianyi Wu, Guofeng Zhang, and Dan Xu. Gscream: Learning 3d geometry and feature consistent gaussian splatting for object removal. In ECCV, 2024c.
Zian Wang, Tianchang Shen, Jun Gao, Shengyu Huang, Jacob Munkberg, Jon Hasselgren, Zan Gojcic, Wenzheng Chen, and Sanja Fidler. Neural fields meet explicit geometric representations for inverse rendering of urban scenes. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2023c.
Ethan Weber, Aleksander Holynski, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, and Angjoo Kanazawa. Nerfiller: Completing scenes via generative 3d inpainting. In CVPR, 2024.
Silvan Weder, Guillermo Garcia-Hernando, Aron Monszpart, Marc Pollefeys, Gabriel Brostow, Michael Firman, and Sara Vicente. Removing objects from neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
Tiang Xiang, Adam Sun, Jiajun Wu, Ehsan Adeli, and Fei-Fei Li. Rendering humans from object-occluded monocular videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2023.
Pengchuan Xiao, Zhenlei Shao, Steven Hao, Zishuo Zhang, Xiaolin Chai, Judy Jiao, Zesong Li, Jian Wu, Kai Sun, Kun Jiang, Yunlong Wang, and Dange Yang. Pandaset: Advanced sensor suite dataset for autonomous driving. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pp. 3095-3101. IEEE Press, 2021. doi: 10.1109/ITSC48978.2021.9565009. URL https://doi.org/10.1109/ITSC48978.2021.9565009.
Yuting Xiao, Jingwei Xu, Zehao Yu, and Shenghua Gao. Debsdf: Delving into the details and bias of neural indoor scene reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024.
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. In Neural Information Processing Systems (NeurIPS), 2021.
Linning Xu, Yuanbo Xiangli, Sida Peng, Xingang Pan, Nanxuan Zhao, Christian Theobalt, Bo Dai, and Dahua Lin. Grid-guided neural radiance fields for large urban scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, and Ulrich Neumann. Point-nerf: Point-based neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5438-5448, 2022.
Rui Xu, Xiaoxiao Li, Bolei Zhou, and Chen Change Loy. Deep flow-guided video inpainting. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Yunzhi Yan, Haotong Lin, Chenxu Zhou, Weijie Wang, Haiyang Sun, Kun Zhan, Xianpeng Lang, Xiaowei Zhou, and Sida Peng. Street gaussians: Modeling dynamic urban scenes with gaussian splatting. In ECCV, 2024.
Bangbang Yang, Wenqi Dong, Lin Ma, Wenbo Hu, Xiao Liu, Zhaopeng Cui, and Yuewen Ma. Dreamspace: Dreaming your room space with text-driven panoramic texture propagation. 2023a.
Ze Yang, Yun Chen, Jingkang Wang, Sivabalan Manivasagam, Wei-Chiu Ma, Anqi Joyce Yang, and Raquel Urtasun. Unisim: A neural closed-loop sensor simulator. In CVPR, 2023b.
Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes, 2023.
Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, and Zhan Xu. Contextual residual aggregation for ultra high-resolution image inpainting. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), pp. 7508-7517, 2020.
Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5505-5514, 2018.
Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Free-form image inpainting with gated convolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 4471-4480, 2019.
Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger. Monosdf: Exploring monocular geometric cues for neural implicit surface reconstruction. Advances in Neural Information Processing Systems (NeurIPS), 2022.
Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, and Lin Gao. Nerf-editing: Geometry editing of neural radiance fields. In CVPR, 2022.
Zefeng Yuan, Hengyu Li, Jingyi Liu, and Jun Luo. Multiview scene image inpainting based on conditional generative adversarial networks. IEEE Transactions on Intelligent Vehicles, 5(2), June 2020.
Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, and Noah Snavely. Arf: Artistic radiance fields, 2022.
Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3836-3847, 2023a.
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 586-595, 2018. doi: 10.1109/CVPR.2018.00068.
Xiaoshuai Zhang, Abhijit Kundu, Thomas Funkhouser, Leonidas Guibas, Hao Su, and Kyle Genova. Nerflets: Local radiance fields for efficient structure-aware 3d scene representation from 2d supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023b.
Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, I Eric, Chao Chang, and Yan Xu. Large scale image completion via co-modulated generative adversarial networks. In International Conference on Learning Representations, 2020.
Yiqun Zhao, Chenming Wu, Binbin Huang, Yihao Zhi, Chen Zhao, Jingdong Wang, and Shenghua Gao. Surfel-based gaussian inverse rendering for fast and relightable dynamic human reconstruction from monocular video. arXiv preprint arXiv:2407.15212, 2024a.
Yiqun Zhao, Zibo Zhao, Jing Li, Sixun Dong, and Shenghua Gao. Roomdesigner: Encoding anchor-latents for style-consistent and shape-compatible indoor scene generation. In Proceedings of the International Conference on 3D Vision (3DV), 2024b.
Zibo Zhao, Wen Liu, Yanyu Xu, Xianing Chen, Weixin Luo, Lei Jin, Bohui Zhu, Tong Liu, Binqiang Zhao, and Shenghua Gao. Prior based human completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7951-7961, 2021.
Hongyu Zhou, Jiahao Shao, Lu Xu, Dongfeng Bai, Weichao Qiu, Bingbing Liu, Yue Wang, Andreas Geiger, and Yiyi Liao. Hugs: Holistic urban 3d scene understanding via gaussian splatting, 2024.
Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3D: A modern library for 3D data processing. arXiv:1801.09847, 2018.
Shangchen Zhou, Chongyi Li, Kelvin C.K Chan, and Chen Change Loy. ProPainter: Improving propagation and transformer for video inpainting. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2023a.
Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, and Achuta Kadambi. Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields. arXiv preprint arXiv:2312.03203, 2023b.
Xueyan Zou, Linjie Yang, Ding Liu, and Yong Jae Lee. Progressive temporal feature alignment network for video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16448-16457, 2021.
# A IMPLEMENTATION DETAILS
# A.1 DETAILS OF HARD-LABEL SEMANTIC 2DGS RECONSTRUCTION
Initialization with Lidar points. High-quality appearance and semantic reconstruction of the whole street scene is hard to achieve with only SfM points Schonberger & Frahm (2016); Schonberger et al. (2016) as initialization. We therefore leverage Lidar points to better reconstruct the street scene, as in Yan et al. (2024); Chen et al. (2023b); Zhou et al. (2024). We use an off-the-shelf 2D semantic segmenter Xie et al. (2021) to process the 2D images and back-project the hard semantic labels onto the 2D Gaussians Huang et al. (2024a).
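A minimal sketch of this back-projection, assuming the points have already been transformed into the camera frame and a pinhole intrinsic matrix is available (hypothetical helper, not the authors' code):

```python
import numpy as np

def backproject_labels(points_cam: np.ndarray, K: np.ndarray, seg_map: np.ndarray) -> np.ndarray:
    """Assign each Lidar point the semantic class of the pixel it projects to."""
    uvw = (K @ points_cam.T).T                        # (N, 3) homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    h, w = seg_map.shape
    valid = (points_cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points_cam), -1, dtype=int)  # -1 marks points outside this view
    labels[valid] = seg_map[v[valid], u[valid]]
    return labels
```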
Environment map for street reconstruction. We empirically find that most 2D Gaussians' opacity will be larger than 0.9 or lower than 0.1, leading to the imperfect reconstruction quality of the background environment, i.e., sky. To better model the environment in the street scene, we employ a tiny MLP $f$ to query the color of the environment map, which is similar to Guo et al. (2023); Turki et al. (2023). The queried environment color at $\mathbf{x}$ is denoted as $\mathbf{c}_{\mathrm{env}}$ . The final color of the ray is obtained by blending the color of 2DGS projection and the environment map as follows:
$$
\mathbf{c}_{\mathrm{env}}(\mathbf{x}) = f(\mathbf{M}, \mathbf{x}) \qquad \mathbf{c}_{\mathrm{final}}(\mathbf{x}) = \mathbf{c}(\mathbf{x}) + \left(1 - \alpha(\mathbf{x})\right)\mathbf{c}_{\mathrm{env}}(\mathbf{x}) \tag{11}
$$
where $\mathbf{M}$ denotes the projection matrix from world coordinates to pixel coordinates. $\alpha (\mathbf{x})$ is the rendered alpha map of 2DGS rendering.
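As a sketch, the blending in Eq. (11) reduces to a single compositing step once the per-pixel environment color has been queried from the tiny MLP (the tensor layout below is an assumption):

```python
import torch

def blend_with_environment(rendered_rgb, rendered_alpha, env_rgb):
    """Composite the 2DGS rendering over the environment color (Eq. 11).

    rendered_rgb:   (H, W, 3) alpha-weighted color from splatting
    rendered_alpha: (H, W, 1) accumulated opacity along each ray
    env_rgb:        (H, W, 3) environment color queried from the MLP
    """
    return rendered_rgb + (1.0 - rendered_alpha) * env_rgb
```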
Details of two-stage reconstruction training.
The optimization of our 2DGS Huang et al. (2024a) reconstruction for street scenes contains two stages. (1) In the first stage, we employ the adaptive density control of 3DGS Kerbl et al. (2023), while $\mathcal{L}_{\mathrm{d}}$, $\mathcal{L}_{\mathrm{n}}$, and $\mathcal{L}_{\mathrm{ds}}$ are deactivated to reach a more stable initialization of the 2DGS reconstruction. (2) In the second stage, $\mathcal{L}_{\mathrm{d}}$, $\mathcal{L}_{\mathrm{n}}$, and $\mathcal{L}_{\mathrm{ds}}$ are activated. Empirically, most 2D Gaussians' opacity ends up larger than 0.9 or lower than 0.1, and noisy 2DGS with wrong semantic labels are driven toward low opacity by $\mathcal{L}_{\mathrm{ds}}$. We prune the Gaussians with opacity lower than a threshold $\epsilon$ to further eliminate the noisy semantics in the 3D world, with $\epsilon$ set to 0.3 in our experiments.
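As an illustration, the stage-two pruning step could look like the following sketch (the parameter-dictionary layout is an assumption, not the released code):

```python
import torch

def prune_low_opacity(splats: dict, eps: float = 0.3) -> dict:
    """Drop splats whose optimized opacity fell below eps."""
    keep = splats["opacity"].squeeze(-1) > eps
    return {name: tensor[keep] for name, tensor in splats.items()}

# toy parameter dictionary with three splats
splats = {
    "position": torch.randn(3, 3),
    "opacity":  torch.tensor([[0.95], [0.10], [0.40]]),
    "label":    torch.tensor([1, 7, 7]),
}
splats = prune_low_opacity(splats)   # the 0.10-opacity splat is removed
```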
# A.2 DETAILS OF TIME-REVERSAL INPAINTING FRAMEWORK
As mentioned in Wang et al. (2024b), when using a latent-diffusion-based inpainting model, repeatedly decoding images with the KL-VAE Kingma & Welling (2022); Rombach et al. (2022) introduces non-negligible shifts in the low-frequency components. Our method can be summarized as inpainting frame $T_{i}$ with $T_{i + 1}$ as a reference through LeftRefill Cao et al. (2024), which is latent-diffusion-based. For a whole video sequence, if we simply inpaint every $T_{i}$ with $T_{i + 1}$ as a reference in an iterative manner, these low-frequency shifts are badly amplified by the KL-VAE and severely harm the quality of our 2D inpainting guidance. To alleviate this inevitable shift from the KL-VAE of the latent diffusion model Rombach et al. (2022), we first select keyframes in the video and then use time-reversal inpainting to inpaint the selected keyframes iteratively in the reversed time sequence.
We first select keyframes at timestamps $\{T_{k_1},\dots ,T_{k_n}\}$ and inpaint all the keyframes in the reversed time sequence. After inpainting keyframe $T_{k_i}$, we generate the intermediate frames between keyframe $T_{k_{i + 1}}$ and keyframe $T_{k_i}$ with keyframe $T_{k_i}$ as the reference image. Per-image processing follows Fig. 4. Finally, we use these results as pseudo-labeled data to further re-optimize the 2DGS of the empty street scene.
To achieve more precise scene optimization, we first identify the 2DGS for removal using hard semantic labels. After removing these 2DGS, we restrict the re-optimization stage to only update Gaussians that are spatially close to the removed ones, ensuring targeted refinement.
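A simple sketch of one way to implement this proximity restriction (the nearest-distance rule and radius are assumptions, not the paper's exact criterion):

```python
import torch

def near_removed_mask(remaining_xyz: torch.Tensor, removed_xyz: torch.Tensor,
                      radius: float = 0.5) -> torch.Tensor:
    """Flag remaining splats within `radius` of any removed splat; only these
    stay trainable during re-optimization."""
    dists = torch.cdist(remaining_xyz, removed_xyz)    # (N_remaining, N_removed)
    return dists.min(dim=1).values < radius

trainable = near_removed_mask(torch.randn(1000, 3), torch.randn(50, 3))
```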

# B MORE EXPERIMENTS

# B.1 ABLATION OF TIME-FORWARD INPAINTING AND TIME-REVERSAL INPAINTING

To further validate the effectiveness of time-reversal inpainting, we conduct an additional ablation, complementing Sec. 6.2, with time-forward inpainting, which is the reversed counterpart of our proposed time-reversal inpainting. In Tab. 3 and Tab. 2, our time-reversal inpainting achieves better quantitative results than time-forward inpainting.

For our time-reversal inpainting, we inpaint frame $T_{n}$ with $T_{n + 1}$ as reference; for time-forward inpainting, frame $T_{n + 1}$ is inpainted with frame $T_{n}$ as reference. Fig. 8 details the process of these two methods. The qualitative comparison in Fig. 9 showcases the high-to-low-resolution nature of time-reversal inpainting, which enhances the quality of the results.




Figure 8: Illustration of the difference in inpainting strategy between time-reversal inpainting and time-forward inpainting. We first use unconditional inpainting to inpaint frame $T_{n+1}$. For our time-reversal inpainting, we inpaint frame $T_{n}$ with $T_{n+1}$ as reference. For time-forward inpainting, frame $T_{n+1}$ is inpainted with frame $T_{n}$ as reference.

Figure 9: Qualitative comparison between time-reversal inpainting and time-forward inpainting. The quality of time-forward inpainting degrades because it follows a low-to-high-resolution order, which would require extra super-resolution capacity from the inpainting model. In contrast, our time-reversal inpainting follows a high-to-low-resolution order, so high-resolution content better guides the low-resolution content.

# B.2 ABLATION OF HARD SEMANTIC LABEL

We additionally ablate the effectiveness of the hard semantic label. From Fig. 11, we observe that both the 2DGS representation and the hard semantic label contribute to a more stable reconstruction of the semantic field.

The comparison between (a) and (b) demonstrates that the use of hard semantic labels effectively reduces noise within the semantic fields.

<table><tr><td></td><td colspan="2">Waymo</td><td colspan="2">Pandaset</td></tr><tr><td></td><td>LPIPS ↓</td><td>FID ↓</td><td>LPIPS ↓</td><td>FID ↓</td></tr><tr><td>Time-Forward Inpainting</td><td>0.220</td><td>136.858</td><td>0.270</td><td>158.166</td></tr><tr><td>Time-Reversal Inpainting (Ours)</td><td>0.216</td><td>127.581</td><td>0.261</td><td>155.527</td></tr></table>

Table 3: Quantitative ablation of time-reversal inpainting versus time-forward inpainting. The results validate the effectiveness of our method.

<table><tr><td></td><td colspan="2">Waymo</td><td colspan="2">Pandaset</td></tr><tr><td></td><td>LPIPS ↓</td><td>FID ↓</td><td>LPIPS ↓</td><td>FID ↓</td></tr><tr><td>LeftRefill Cao et al. (2024)</td><td>0.227</td><td>135.421</td><td>0.288</td><td>168.112</td></tr><tr><td>Ours</td><td>0.216</td><td>127.581</td><td>0.261</td><td>155.527</td></tr></table>

Table 4: Quantitative comparison with LeftRefill Cao et al. (2024). The results validate the effectiveness of our method.

In addition, the comparison between (a) and (c) indicates that the 2DGS representation leads to more stable semantic fields. Finally, (d) illustrates the clean and stable semantic field achieved by employing hard-label semantic 2DGS in our method.

By reconstructing a clean and stable semantic field of the street scene, we can more accurately identify the Gaussians that need to be removed. This facilitates a high-quality 2D inpainting result, which in turn serves as effective guidance for re-optimization.
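
For reference, a hard semantic label can be obtained by collapsing the segmentation network's per-pixel soft class probabilities to a one-hot vector before they supervise the semantic field. The sketch below is illustrative only; the tensor layout is an assumption.

```python
import torch
import torch.nn.functional as F

def to_hard_label(soft_probs: torch.Tensor) -> torch.Tensor:
    """Collapse per-pixel class probabilities (H, W, C) into one-hot hard labels (H, W, C)."""
    num_classes = soft_probs.shape[-1]
    hard_ids = soft_probs.argmax(dim=-1)                          # (H, W)
    return F.one_hot(hard_ids, num_classes).to(soft_probs.dtype)  # (H, W, C)
```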

# B.3 COMPARISON WITH LEFTREFILL

We additionally discuss a qualitative comparison with LeftRefill Cao et al. (2024) as another baseline. Since LeftRefill requires a reference image, it cannot be run naturally as an unconditional inpainting method like LaMa Suvorov et al. (2021) and SDXL Podell et al. (2023) in Tab. 1. We therefore adapt LeftRefill by conditioning on the 10th future frame and use the mask obtained after our reconstruction stage; LeftRefill is also operated in reverse order.

From Tab. 4, we observe that our time-reversal inpainting pipeline generates better results than LeftRefill. From Fig. 10, we observe that because this adapted naive reverse inpainting with LeftRefill takes the future frame as a reference, some regions are not visible in the future frame. This limitation can lead to low-quality inpainting, as highlighted by the red box in LeftRefill's result. In contrast, our pipeline generates a more natural inpainting result through 2DGS re-optimization, ultimately achieving a clear 3D inpainting result.



Figure 10: Comparison with LeftRefill Cao et al. (2024). Since naive reverse inpainting with LeftRefill takes the future frame as a reference, some regions are not visible in the future frame. This leads to low-quality inpainting, as highlighted by the red box in LeftRefill's result. Our pipeline generates a more natural inpainting result for 2DGS re-optimization and finally obtains a clear 3D inpainting result.




(a) 3DGS w/ soft label

(b) 3DGS w/ hard label



(c) 2DGS w/ soft label



(d) 2DGS w/ hard label (Ours)

Figure 11: Ablation of the hard semantic label. Black rectangles highlight the noise in the semantic fields. The comparison between (a) and (b) demonstrates that hard semantic labels effectively reduce noise within semantic fields. Similarly, the comparison between (a) and (c) indicates that the 2DGS representation contributes to more stable semantic fields. Finally, (d) illustrates the clean and stable semantic field achieved by employing hard-label semantic 2DGS in our method.

<table><tr><td>Dataset (Frames)</td><td>Reconstruction</td><td>Our Inpaint</td><td>Re-optimize</td></tr><tr><td>Waymo (198)</td><td>10016 sec</td><td>1035 sec</td><td>524 sec</td></tr><tr><td>Pandaset (80)</td><td>9640 sec</td><td>311 sec</td><td>257 sec</td></tr></table>

Table 5: Computational cost of each stage of our pipeline (in seconds). We evaluate on both Waymo and Pandaset, using 8 scenes from each dataset and averaging their time consumption. "Reconstruction" is the main efficiency bottleneck of our pipeline.

<table><tr><td>Method</td><td>Waymo (198)</td><td>Pandaset (80)</td></tr><tr><td>LaMa Suvorov et al. (2021)</td><td>0.20 sec</td><td>0.28 sec</td></tr><tr><td>SDXL Podell et al. (2023)</td><td>5.18 sec</td><td>5.21 sec</td></tr><tr><td>ProPainter Zhou et al. (2023a)</td><td>0.59 sec</td><td>0.53 sec</td></tr><tr><td>Ours</td><td>5.22 sec</td><td>5.09 sec</td></tr></table>

Table 6: Per-frame time cost comparison for 2D inpainting. We use 8 scenes from each dataset and average their time consumption. Our method is comparable to SDXL in per-frame time cost.

# B.4 QUALITATIVE COMPARISON OF RENDERED GEOMETRY

Since we aim to reconstruct the empty street, we also compare the geometric quality of our method, not just its appearance. From Fig. 14, Fig. 15, Fig. 16 and Fig. 17, we observe that our method produces both better appearance and better geometry, as shown by the rendered RGB and normal images.

# B.5 COMPUTATIONAL ANALYSIS

We additionally conduct a computational analysis of each stage of our pipeline and of the per-frame inpainting time cost. From Tab. 5, we observe that the "Reconstruction" stage is the main efficiency bottleneck of our pipeline. By utilizing recent techniques such as InstantSplat Fan et al. (2024), the whole pipeline could potentially be accelerated to reconstruct an empty street in roughly 30 minutes. From Tab. 6, our pipeline is not inferior to the diffusion-based method Podell et al. (2023) in terms of efficiency.

# C ADDITIONAL RESULTS

# C.1 EMPTY STREET SCENE MESH EXTRACTION

We can further extract a mesh of our reconstructed empty street scene using TSDF fusion, following 2DGS Huang et al. (2024a), with Open3D Zhou et al. (2018). In Fig. 19 and Fig. 20, we visualize the extracted colored mesh before and after our unveiling. Our inpainting framework successfully removes unwanted cars from the street and finally reconstructs an empty street in mesh representation. The mesh extraction results further verify that our method produces correct geometry.
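
As a rough illustration of the TSDF-fusion step, the sketch below integrates rendered RGB and depth maps from the optimized 2DGS into an Open3D TSDF volume and extracts a colored mesh. Voxel size, truncation distance, and the way views are provided are assumptions for illustration, not the exact settings used.

```python
import open3d as o3d

def fuse_tsdf(views, intrinsic: o3d.camera.PinholeCameraIntrinsic,
              voxel_length: float = 0.05, sdf_trunc: float = 0.2):
    """views: iterable of (rgb uint8 HxWx3, depth float32 HxW in meters, 4x4 world-to-camera)."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel_length,
        sdf_trunc=sdf_trunc,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
    )
    for rgb, depth, extrinsic in views:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(rgb),
            o3d.geometry.Image(depth),
            depth_scale=1.0,          # depth already in meters
            depth_trunc=80.0,         # ignore far-away geometry
            convert_rgb_to_intensity=False,
        )
        volume.integrate(rgbd, intrinsic, extrinsic)
    return volume.extract_triangle_mesh()
```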

We clarify that our problem formulation is not exactly the same as that of StreetSurf Guo et al. (2023). The target of our method is to reconstruct the empty street, whereas the extracted mesh and reconstructed street of StreetSurf correspond to the original static scene "before unveiled".

Another key difference is that our setting lacks ground-truth "after unveiled" training data for both Lidar and images, while StreetSurf relies on ground-truth "before unveiled" data for both training and evaluation.

Nevertheless, we can still evaluate the reconstructed "before unveiled" scenes against StreetSurf, which provides meaningful insights for future work.

Following StreetSurf, we utilize the Lidar data at real-world scale for geometry evaluation. In StreetSurf, the extracted mesh may include some parts outside of the scene.

<table><tr><td>Method</td><td>CD ↓</td><td>F-Score ↑</td></tr><tr><td>StreetSurf Guo et al. (2023)</td><td>0.52</td><td>56.70</td></tr><tr><td>Ours (before unveiled)</td><td>0.55</td><td>61.54</td></tr></table>

<table><tr><td>Geometry-related input</td><td>Lidar</td><td>Monocular Prior</td></tr><tr><td>StreetSurf Guo et al. (2023)</td><td>✓</td><td>✓</td></tr><tr><td>Ours (before unveiled)</td><td>✓</td><td></td></tr></table>

Table 7: Comparison of reconstruction performance on "before unveiled" geometry with StreetSurf Guo et al. (2023) on 24 scenes from the Waymo dataset Sun et al. (2020). Left: Our Chamfer Distance is slightly higher (worse) than StreetSurf's, while our F-Score is higher (better); the reconstruction performance on the observed geometry appears comparable. Right: Checklist of geometry-related input data. We run StreetSurf with both the monocular prior and Lidar as input, whereas our approach operates without relying on the monocular geometry prior.

To ensure a fair comparison, we crop these out-of-range parts from the extracted mesh. Specifically, any part of the mesh that is more than 5 meters away from the closest Lidar point is cropped.
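
A sketch of this 5-meter cropping rule, assuming the extracted mesh and the aggregated Lidar points are available as an Open3D mesh and a NumPy array; the wiring is illustrative.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def crop_far_from_lidar(mesh: o3d.geometry.TriangleMesh,
                        lidar_points: np.ndarray,
                        max_dist: float = 5.0) -> o3d.geometry.TriangleMesh:
    """Drop mesh vertices farther than `max_dist` meters from the closest Lidar point."""
    verts = np.asarray(mesh.vertices)
    dists, _ = cKDTree(lidar_points).query(verts)     # nearest-neighbor distance per vertex
    mesh.remove_vertices_by_mask(dists > max_dist)    # True entries are removed
    return mesh
```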

For the experimental setup, we adopt both Chamfer Distance (CD) and F-score with a 0.25-meter threshold as our geometry evaluation metrics. We evaluate StreetSurf using both monocular geometry priors and Lidar as geometry-related inputs, while our method is tested exclusively with Lidar as the geometry-related input.
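
For clarity, a minimal sketch of the two metrics computed on point sets sampled from the prediction and the Lidar ground truth; the sampling density and averaging conventions are assumptions and may differ from the exact evaluation protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred: np.ndarray, gt: np.ndarray, tau: float = 0.25):
    """pred, gt: (N, 3) point sets in meters; tau: F-score distance threshold."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)    # distance from each predicted point to GT
    d_gt_to_pred, _ = cKDTree(pred).query(gt)    # distance from each GT point to prediction

    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()

    precision = (d_pred_to_gt < tau).mean()
    recall = (d_gt_to_pred < tau).mean()
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer, 100.0 * fscore               # F-score reported as a percentage
```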

From Tab. 7, we observe that our Chamfer Distance is greater than that of StreetSurf, while our F-Score surpasses StreetSurf's. From Fig. 12, we empirically observe that StreetSurf achieves higher accuracy on the observable ground of the street. Even after fair mesh cropping, the reconstructed mesh of StreetSurf is still affected by out-of-range meshes to some extent. This results from the inherent methodological differences between SDF extraction and TSDF fusion, which may lead to a slightly lower F-Score for the SDF extraction approach. Overall, the reconstruction performance on the observed geometry appears to be comparable.


StreetSurf


Ours

Figure 12: "Before unveiled" geometry comparison with StreetSurf Guo et al. (2023). "Green" indicates "more accurate" regions, while "Red" represents "less accurate" regions. We empirically observe that StreetSurf achieves higher accuracy on the observable ground of the street. Even after fair mesh cropping, the reconstructed mesh of StreetSurf is still influenced by out-of-range meshes to some extent. This effect stems from inherent methodological differences between SDF extraction and TSDF fusion, which may contribute to a slightly lower F-Score for the SDF extraction approach. Overall, the reconstruction performances are comparable.

# C.2 EXAMPLE OF REMOVING A STANDING PEDESTRIAN

As shown in Fig. 21, we highlight an example of removing a standing pedestrian from the scene.

# C.3 MORE VISUAL COMPARISON

We provide additional qualitative comparisons of inpainting results on the Pandaset dataset Xiao et al. (2021) in Fig. 24.

# C.4 VIDEO VISUALIZATIONS

To conveniently view our video results, we provide a web viewer at "./index.html" in the root path of the supplementary materials.

# C.4.1 NOVEL VIEW SYNTHESIS VIDEO VISUALIZATIONS

As shown in Fig. 23, we showcase two novel view synthesis videos; their file paths are illustrated in the figure.

# C.4.2 MORE VIDEO VISUALIZATIONS

As shown in Fig. 25, we visualize three scenes involved in Tab. 1 for better comparison; the file paths are illustrated in the figure. From the video comparison, it can be observed that our method outperforms the other baselines.

# D DISCUSSION ON SEMANTIC LABEL SUPERVISION

We use SegFormer Xie et al. (2021) as the pre-trained segmentation model, and our results appear robust overall. Both our quantitative and qualitative results show that our method is stable with SegFormer.

While certain failure cases inevitably occur for small objects or object corners, these challenges are common to most segmentation methods. Example cases are illustrated in Fig. 13. As segmentation techniques continue to evolve, our method is poised to benefit and improve alongside them.
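
For completeness, per-pixel semantic labels of this kind can be obtained from a pre-trained SegFormer via the `transformers` library, roughly as sketched below; the checkpoint name is an assumed example and not necessarily the one used in our experiments.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Assumed example checkpoint: any SegFormer model fine-tuned on a driving dataset could be used.
ckpt = "nvidia/segformer-b5-finetuned-cityscapes-1024-1024"
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt).eval()

def semantic_mask(image: Image.Image) -> torch.Tensor:
    """Per-pixel class ids at the original image resolution."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                   # (1, C, H/4, W/4)
    logits = torch.nn.functional.interpolate(
        logits, size=image.size[::-1], mode="bilinear", align_corners=False)
    return logits.argmax(dim=1)[0]                        # (H, W)
```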

# E DISCUSSION ON DYNAMIC OBJECT REMOVAL

We illustrate a simple dynamic object removal case in Fig. 22, which can also be observed in scene 3 (a moving car) of the supplementary videos. For more challenging dynamic cases, utilizing optical flow or dynamic object detection over the video sequence to identify dynamic objects in 2D may be a viable solution.

Alternatively, we can model dynamic Gaussians in street scenes, as in Chen et al. (2023b) and Yan et al. (2024), for more challenging dynamic object removal.

The separation of dynamic and static elements may also facilitate the removal of dynamic objects from scenes. Current methods, such as StreetGaussians Yan et al. (2024) (which utilizes 3D box-based scene decomposition) and S3Gaussian Huang et al. (2024b) (a self-supervised approach to scene decomposition), aim to distinguish between dynamic and static components. However, these methods may not consistently differentiate static removable objects (like stopped cars) from essential scene elements (like traffic signs).

# F DISCUSSION ON $360^{\circ}$ OR WIDE-RANGE SURROUNDING VIDEOS

Our time-reversal inpainting pipeline works for "forward-facing" cameras. We therefore use the frontal cameras in our experiments, matching the experimental setup of Yan et al. (2024); Chen et al. (2023b); Zhou et al. (2024). We find the inpainting results of the frontal view sufficient for recovering a 3D unveiled street with satisfactory geometry. For the left-back and right-back (side and back) cameras, our method does not naturally guarantee consistency of the inpainting. Homography-based techniques may help maintain consistency between different views by leveraging the image overlaps across those views.

# G SOCIETAL IMPACTS

This technology can distort public space representations in urban planning, potentially leading to flawed decisions. Additionally, it may be misused to alter important archaeological sites in digital reconstructions, resulting in misinformation about historical facts.



Figure 13: For the semantic segmentation mask predicted by SegFormer Xie et al. (2021) at the original full resolution, some noise exists at the boundaries between regions with different semantic tags. These failure cases are more likely to occur at the corners of small objects.



Figure 14: Illustration of geometry performance comparison.



Figure 15: Illustration of geometry performance comparison.



Figure 16: Illustration of geometry performance comparison.



Figure 17: Illustration of geometry performance comparison.



Figure 18: Illustration of geometry performance comparison.



Figure 19: Illustration of colored mesh comparison between before and after the unveiling.




Figure 20: Illustration of colored mesh comparison between before and after the unveiling.



Training data


Our unveiled

Figure 21: Illustration of removing a standing pedestrian.


Training data


Our unveiled


Figure 22: Illustration of removing a simple dynamic case.



Novel View Synthesis Videos

./static/videos/NVS/nvs1.mp4

Figure 23: Illustration of the novel view synthesis videos and their file paths. It is recommended to open our web viewer located at "./index.html".


./static/videos/NVS/nvs2.mp4


Figure 24: More qualitative comparisons on the Pandaset dataset Xiao et al. (2021). Our method produces a clearer result for the ground and trees behind the removed object compared to the baselines.


Figure 25: Illustration of the video comparisons with baselines and their file paths. It is recommended to open our web viewer located at "./index.html". "data_vs_ours.mp4" shows our results compared with the training data for visualization. "infusion_vs_ours.mp4" shows both RGB and normal results for Infusion and our method. "lama_vs_ours.mp4" shows our results compared to LaMa. "propainter_vs_ours.mp4" shows our results compared to ProPainter, a state-of-the-art video inpainting method. "sdxl_vs_ours.mp4" shows our results compared to SDXL. "spin_vs_ours.mp4" shows both RGB and normal results for SPIn-NeRF in 2DGS representation and our method.

3dstreetunveilerwithsemanticaware2dgsasimplebaseline/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3674a3c52b52134ed54e2c6070db2651ae871566c3b0b87c6310386a1f1893b9
+size 3173180

3dstreetunveilerwithsemanticaware2dgsasimplebaseline/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b4f2f1e15cd9cda2b127df77175fb0634fe81eb90d66df5a07f131f87fdb408
+size 710641

3dtrajmastermastering3dtrajectoryformultientitymotioninvideogeneration/676ebc6a-2e43-43ba-bc5e-c1ea226e8932_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62c372ba249a67a1f14e801f607a4c15bb740c542ecd682a32fc51a2abfe26af
+size 107335

3dtrajmastermastering3dtrajectoryformultientitymotioninvideogeneration/676ebc6a-2e43-43ba-bc5e-c1ea226e8932_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d63244e595bfbec6764eb28bc57989ecd5156de08bc655e508cdfc23d8110eb4
+size 130267