Add Batch fda7a2d5-4c7b-431e-81da-63a102504a46
This view is limited to 50 files because it contains too many changes. See raw diff
- CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/eebebb2c-dd4f-4c4d-bbde-d36484d550d9_content_list.json +3 -0
- CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/eebebb2c-dd4f-4c4d-bbde-d36484d550d9_model.json +3 -0
- CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/eebebb2c-dd4f-4c4d-bbde-d36484d550d9_origin.pdf +3 -0
- CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/full.md +417 -0
- CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/images.zip +3 -0
- CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/layout.json +3 -0
- CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/07d97fcd-beb6-4db8-9d15-727f79d06230_content_list.json +3 -0
- CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/07d97fcd-beb6-4db8-9d15-727f79d06230_model.json +3 -0
- CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/07d97fcd-beb6-4db8-9d15-727f79d06230_origin.pdf +3 -0
- CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/full.md +295 -0
- CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/images.zip +3 -0
- CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/layout.json +3 -0
- CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/64d1b96f-e41e-4ee5-88fe-92864a01628b_content_list.json +3 -0
- CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/64d1b96f-e41e-4ee5-88fe-92864a01628b_model.json +3 -0
- CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/64d1b96f-e41e-4ee5-88fe-92864a01628b_origin.pdf +3 -0
- CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/full.md +287 -0
- CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/images.zip +3 -0
- CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/layout.json +3 -0
- CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/d97dc35c-1298-4ebe-af81-4eec695a335e_content_list.json +3 -0
- CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/d97dc35c-1298-4ebe-af81-4eec695a335e_model.json +3 -0
- CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/d97dc35c-1298-4ebe-af81-4eec695a335e_origin.pdf +3 -0
- CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/full.md +423 -0
- CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/images.zip +3 -0
- CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/layout.json +3 -0
- CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/19b2c42f-ce51-44ed-b899-64702962cff1_content_list.json +3 -0
- CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/19b2c42f-ce51-44ed-b899-64702962cff1_model.json +3 -0
- CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/19b2c42f-ce51-44ed-b899-64702962cff1_origin.pdf +3 -0
- CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/full.md +328 -0
- CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/images.zip +3 -0
- CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/layout.json +3 -0
- CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/319bbfa4-f317-493b-a350-5e2125370b5a_content_list.json +3 -0
- CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/319bbfa4-f317-493b-a350-5e2125370b5a_model.json +3 -0
- CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/319bbfa4-f317-493b-a350-5e2125370b5a_origin.pdf +3 -0
- CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/full.md +318 -0
- CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/images.zip +3 -0
- CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/layout.json +3 -0
- CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/a31f06fe-0eb5-4f9e-ab73-37feb4a38682_content_list.json +3 -0
- CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/a31f06fe-0eb5-4f9e-ab73-37feb4a38682_model.json +3 -0
- CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/a31f06fe-0eb5-4f9e-ab73-37feb4a38682_origin.pdf +3 -0
- CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/full.md +440 -0
- CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/images.zip +3 -0
- CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/layout.json +3 -0
- CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/56588c3b-cd81-42c5-a2a1-c7cdd5a9d9cc_content_list.json +3 -0
- CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/56588c3b-cd81-42c5-a2a1-c7cdd5a9d9cc_model.json +3 -0
- CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/56588c3b-cd81-42c5-a2a1-c7cdd5a9d9cc_origin.pdf +3 -0
- CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/full.md +355 -0
- CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/images.zip +3 -0
- CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/layout.json +3 -0
- CVPR/2025/iSegMan_ Interactive Segment-and-Manipulate 3D Gaussians/41e89e4d-a900-4b4c-acdd-4a77ec356f1d_content_list.json +3 -0
- CVPR/2025/iSegMan_ Interactive Segment-and-Manipulate 3D Gaussians/41e89e4d-a900-4b4c-acdd-4a77ec356f1d_model.json +3 -0
CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/eebebb2c-dd4f-4c4d-bbde-d36484d550d9_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb8559e0b6ca565e7fc489f91a6d6cb73fb2ee36c645a29591aaf560b326ff6c
+size 92636
CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/eebebb2c-dd4f-4c4d-bbde-d36484d550d9_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de81c115e27fdc74d86deed864471a123bed1c9a3a77dc2a528d78d15bcbc970
+size 118465
CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/eebebb2c-dd4f-4c4d-bbde-d36484d550d9_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f41f25e3b2bfe0f56c077ebfbe2ca0ec780972f6fce4b3cfbf7f69921802bfb
+size 8213762
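
The JSON and PDF entries above are Git LFS pointer files: the repository stores only a `version` line, a SHA-256 `oid`, and the byte `size`, while the actual blob lives in LFS storage. As an illustration only (not part of this commit; the local paths are hypothetical), a minimal Python sketch for parsing such a pointer and checking a downloaded blob against it could look like this:

```python
import hashlib
from pathlib import Path

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["oid"] = fields["oid"].split(":", 1)[1]   # stored as "sha256:<hex digest>"
    fields["size"] = int(fields["size"])
    return fields

def verify_blob(pointer_path: str, blob_path: str) -> bool:
    """Check that a downloaded blob matches the pointer's oid and size."""
    pointer = parse_lfs_pointer(Path(pointer_path).read_text())
    data = Path(blob_path).read_bytes()
    return (len(data) == pointer["size"]
            and hashlib.sha256(data).hexdigest() == pointer["oid"])

# Example (hypothetical local paths):
# verify_blob("eebebb2c-..._origin.pdf", "downloads/origin.pdf")
```
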
CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/full.md
ADDED
@@ -0,0 +1,417 @@
# ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping
Shun Iwase $^{1,2}$ Muhammad Zubair Irshad $^{2}$ Katherine Liu $^{2}$ Vitor Guizilini $^{2}$ Robert Lee $^{3}$ Takuya Ikeda $^{3}$ Ayako Amma $^{3}$ Koichi Nishiwaki $^{3}$ Kris Kitani $^{1}$ Rares Ambrus $^{2}$ Sergey Zakharov $^{2}$ $^{1}$ Carnegie Mellon University $^{2}$ Toyota Research Institute $^{3}$ Woven by Toyota
# Abstract
Robotic grasping is a cornerstone capability of embodied systems. Many methods directly output grasps from partial information without modeling the geometry of the scene, leading to suboptimal motion and even collisions. To address these issues, we introduce ZeroGrasp, a novel framework that simultaneously performs 3D reconstruction and grasp pose prediction in near real-time. A key insight of our method is that occlusion reasoning and modeling the spatial relationships between objects is beneficial for both accurate reconstruction and grasping. We couple our method with a novel large-scale synthetic dataset, which comprises 1M photo-realistic images, high-resolution 3D reconstructions and 11.3B physically-valid grasp pose annotations for 12K objects from the Objaverse-LVIS dataset. We evaluate ZeroGrasp on the GraspNet-1B benchmark as well as through real-world robot experiments. ZeroGrasp achieves state-of-the-art performance and generalizes to novel real-world objects by leveraging synthetic data. https://sh8.io/#/zerograsp
# 1. Introduction

Safe and robust robotic grasping requires accurate geometric understanding of target objects, as well as their surroundings. However, most previous grasp detection methods [1-6] do not explicitly model the geometry of the target objects, which can lead to unexpected collisions and unstable contact with target objects. Although several methods [3, 7] leverage multi-view images to reconstruct the target objects in advance, this process introduces additional computational overhead and requires a more complex setup. Multi-view reconstruction is also often impractical for objects placed within confined spaces like shelves or boxes. Furthermore, the lack of large-scale datasets with ground-truth 3D shapes and grasp pose annotations further complicates accurate 3D reconstruction from a single RGB-D image.

(a) RGB Image. (b) Noisy Depth Map. (c) 3D Reconstruction and Predicted Grasp Poses.

Figure 1. ZeroGrasp simultaneously reconstructs objects at high resolution and predicts grasp poses from a single RGB-D image in near real-time (5 FPS).

Recently, several works [8-10] demonstrate that sparse voxel representations outperform volumetric and NeRF-like implicit shape representations in terms of runtime, accuracy, and resolution, particularly for regression-based zero-shot 3D reconstruction.
To leverage reconstruction methods using sparse voxel representations for robotic grasping, it is essential to develop new approaches that can reason about both within a unified framework. To this end, we propose ZeroGrasp, a novel framework for near real-time 3D reconstruction and 6D grasp pose prediction. Our key hypothesis is that improved 3D reconstruction quality enhances grasp pose prediction, specifically by leveraging physics-based contact constraints and collision detection, which are essential for accurate grasping. Since robotic environments often involve
Figure 2. Overview of ZeroGrasp, a novel method for simultaneous 3D reconstruction and 6D grasp pose predictions from a single-view RGB-D image. The input octree $\mathbf{x}$ is first fed into the octree-based CVAE (components with orange boxes). The multi-object encoder takes its latent feature $\ell$ to learn multi-object reasoning at the latent space. Further, 3D occlusion fields encode inter- and self-occlusion information via simple ray casting. The output features from the multi-object encoder and 3D occlusion fields are concatenated with the latent code $\mathbf{z}$ , and 3D shapes and grasp poses are predicted by the decoder.
multiple objects with inter-object occlusions and close contacts, ZeroGrasp introduces two key components: a multi-object encoder and 3D occlusion fields. These components effectively model inter-object relationships and occlusions, thus, improving reconstruction quality. In addition, we design a simple refinement algorithm to improve grasp poses using the predicted reconstruction. Because the reconstruction is highly accurate, it provides reliable contact points and collision masks between the gripper and the target object, which we use to refine the grasp poses.

In addition to our proposed model, we also create a real-world dataset for evaluation, the ReOcS dataset, and a synthetic dataset for training, the ZeroGrasp-11B dataset. The ReOcS dataset is a real-world evaluation dataset for 3D reconstruction, with three splits representing different degrees of occlusion. We use this dataset to assess robustness to occlusions. The ZeroGrasp-11B dataset is a large-scale synthetic dataset designed to train models with zero-shot robotic grasping capability, containing high-quality and diverse 3D models from the Objaverse-LVIS dataset [11], as shown in Table 1.
We evaluate both the baseline and our methods, showing that our approach — trained on the GraspNet-1B dataset [1] alone, as well as on a combination of the GraspNet-1B dataset and ZeroGrasp-11B — achieves state-of-the-art performance on the GraspNet-1B benchmark. Our ablation studies further show that the proposed components enhance both reconstruction and grasp pose prediction quality. Finally, we conduct real-robot evaluations to demonstrate its generalizability in real-world scenarios.
Our contributions are summarized as follows:

- We propose ZeroGrasp, a novel framework for simultaneous 3D reconstruction and 6D grasp pose prediction using an octree-based conditional variational autoencoder (CVAE). ZeroGrasp achieves the best performance on the GraspNet-1B benchmark and real-robot evaluation.
- We introduce a multi-object encoder and 3D occlusion fields to model inter-object relationships and occlusions.
- We propose a simple grasp pose refinement algorithm that improves grasp accuracy using the reconstructed 3D shape.
- We create two datasets, 1) the ReOcS dataset for evaluating 3D reconstruction under occlusions, and 2) the ZeroGrasp-11B dataset for zero-shot robotic grasping.
# 2. Related Works

Regression-based 3D reconstruction. Regression-based 3D reconstruction from a single-view RGB-D image [8, 20-47] has been a major focus of research in 3D computer vision. These methods explore different 3D representations, including dense voxel grids [23, 31, 39, 48], sparse voxel grids [8, 9, 49] (e.g., octrees [9], VDB [49], hash tables [8], etc.), and implicit representations [20, 33, 34, 38]. Nevertheless, dense voxel grid and implicit representations face limitations in output resolution due to expensive memory and computational costs. On the other hand, several works [9, 20, 21, 49] show that sparse voxel representations such as octrees and VDB [50] enable high-resolution 3D reconstruction with faster runtime thanks to their efficient hierarchical structure. Alternatively, single-view reconstruction through novel view synthesis achieves impressive results. Recent works such as GeNVS [51], Zero-1-to-3 [52], 3DiM [53], and InstantMesh [54] leverage diffusion models to render multi-view images given a canonical camera pose. However, these approaches are slow (often over 10 seconds) and inter-object occlusions degrade the performance significantly. Further, integrating grasp pose prediction is nontrivial. Thus, we adopt an octree as a shape representation
Table 1. Dataset comparisons. We create a large-scale grasp detection dataset for zero-shot robotic grasping using 12K 3D models from Objaverse-LVIS dataset [11]. Our ZeroGrasp-11B dataset includes 1 million RGB-D images and physics-based dense 6D grasp annotations of cluttered scenes.

<table><tr><td>Dataset</td><td># Images</td><td># 3D Models</td><td># Grasps</td><td># Cat.</td><td>Type</td><td>Modality</td><td>Grasp Alg.</td><td>Grasp Rep.</td></tr><tr><td>Cornell [2]</td><td>1K</td><td>0.2K</td><td>8K</td><td>16</td><td>Real</td><td>RGB-D</td><td>Manual</td><td>Planar</td></tr><tr><td>Jacquard [12]</td><td>54K</td><td>11K</td><td>1.1M</td><td>N/A</td><td>Sim.</td><td>RGB-D</td><td>Physics</td><td>Planar</td></tr><tr><td>Zhang et al. [13]</td><td>4.7K</td><td>≈15K</td><td>100K</td><td>N/A</td><td>Real</td><td>RGB</td><td>Manual</td><td>Planar</td></tr><tr><td>VR-Grasping-101 [14]</td><td>10K</td><td>0.1K</td><td>4.8M</td><td>7</td><td>Sim.</td><td>RGB-D</td><td>Manual</td><td>6D</td></tr><tr><td>GraspNet-1Billion [1]</td><td>97K</td><td>0.1K</td><td>1.2B</td><td>30-35</td><td>Real</td><td>RGB-D</td><td>Analytical</td><td>6D</td></tr><tr><td>ACRONYM [15]</td><td>N/A</td><td>9K</td><td>17.7M</td><td>262</td><td>Sim.</td><td>N/A</td><td>Physics</td><td>6D</td></tr><tr><td>REGRAD [16]</td><td>900K</td><td>50K</td><td>100M</td><td>55</td><td>Sim.</td><td>N/A</td><td>Physics</td><td>6D</td></tr><tr><td>HouseCat6D [17]</td><td>23.5K</td><td>0.2K</td><td>10M</td><td>10</td><td>Real</td><td>RGB-D+P</td><td>Physics</td><td>6D</td></tr><tr><td>Grasp-Anything-6D [18]</td><td>1M</td><td>N/A</td><td>200M</td><td>N/A</td><td>Synth.</td><td>RGB + ZoeDepth [19]</td><td>Analytical</td><td>6D</td></tr><tr><td>ZeroGrasp-11B (Ours)</td><td>1M</td><td>12K</td><td>11.3B</td><td>606</td><td>Sim.</td><td>RGB-D</td><td>Physics</td><td>6D</td></tr></table>
and design our framework based on octree-based U-Net [9].

Regression-based Grasp Pose Prediction. Traditional grasp pose prediction methods typically assume prior knowledge of 3D objects and often rely on simplified analytical models based on force closure principles [55, 56]. Recently, tremendous progress has been made in learning-based approaches [1, 6, 57, 58], which have allowed models to predict 6D grasp poses directly from RGB(-D) images and point clouds. This has enabled the regression of grasp poses in highly cluttered scenes without explicitly modeling object geometries. However, this can result in unstable grasping and unexpected collisions, as accurately learning collision avoidance and precise contact points remains challenging. Although some methods [42, 59, 60] explore 3D reconstruction to improve grasp prediction, their choices of shape representations and network architectures often limit its full potential.

Zero-shot robotic grasping. Zero-shot robotic grasping refers to the ability to grasp unseen target objects without prior knowledge. To achieve this, there are mainly two directions — (1) optimizing grasp poses at test time based on contact points using reconstructed or ground-truth 3D shapes [3, 61], and (2) augmenting or synthesizing large-scale grasp data to improve generalization [1, 15, 62]. For instance, Ma et al. [3] propose a contact-based optimization algorithm to refine initial grasp poses by using a reconstructed 3D scene from multi-view RGB-D images. Existing large-scale grasp pose datasets such as ACRONYM [15], GraspNet-1B [1], and EGAD [62] explore the second approach. Nevertheless, they are limited in object diversity or lack annotations such as RGB-D images. Inspired by these two approaches, we aim to improve generalization to unseen objects with a simple and efficient grasp pose refinement algorithm that utilizes predicted reconstructions. Further, we create a large-scale synthetic dataset for grasp pose detection. Our dataset comprises
high-quality and diverse objects, as well as 1M photorealistic RGB images and physics-based grasp pose annotations.
# 3. Proposed Method
Our goal is to build an efficient and generalizable model for simultaneous 3D shape reconstruction and grasp pose prediction from a single RGB-D observation, and to demonstrate that the predicted reconstructions can be used to refine grasp poses via contact-based constraints and collision detection. In this section, we describe the network architecture and grasp pose refinement algorithm.
3D shape representation. We adopt an octree as a shape representation where attributes such as image features, the signed distance function (SDF), normals, and grasp poses are defined at the deepest level of the octree. For instance, we represent an input octree as a tuple of voxel centers $\mathbf{p}$ at the final depth, associated with image features $\mathbf{f}$ ,

$$
\mathbf{x} = (\mathbf{p}, \mathbf{f}), \quad \mathbf{p} \in \mathbb{R}^{N \times 3}, \quad \mathbf{f} \in \mathbb{R}^{N \times D}, \tag{1}
$$

where $N$ is the number of voxels. Unlike point clouds, an octree structure enables efficient depth-first search and recursive subdivision into octants, making it ideal for high-resolution shape reconstruction and dense grasp pose prediction in a memory and computationally efficient manner.
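
To make the shape representation concrete, the following sketch (an illustration only, not the authors' code) builds the leaf-level tuple $(\mathbf{p}, \mathbf{f})$ of Eq. (1) by snapping points to a voxel grid at the finest octree depth and averaging the features that fall into each voxel; the voxel size and function names are assumptions.

```python
import numpy as np

def build_leaf_voxels(points, feats, voxel_size=0.004):
    """Quantize a point cloud (Q, 3) with features (Q, D) into leaf voxels.

    Returns voxel centers p of shape (N, 3) and averaged features f of
    shape (N, D), one entry per occupied voxel at the finest depth.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)          # voxel indices
    uniq, inverse = np.unique(idx, axis=0, return_inverse=True)   # N occupied voxels
    inverse = inverse.ravel()
    p = (uniq + 0.5) * voxel_size                                 # voxel centers
    f = np.zeros((len(uniq), feats.shape[1]))
    np.add.at(f, inverse, feats)                                  # sum features per voxel
    counts = np.bincount(inverse, minlength=len(uniq))
    f /= counts[:, None]                                          # average
    return p, f
```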

Grasp pose representation. We represent grasp poses using a general two-finger parallel gripper model, as used in GraspNet [1]. Specifically, our grasp poses consist of the following components: view graspness score $\mathbf{s} \in \mathbb{R}^M$, which indicates the robustness of grasp positions [57]; quality $\mathbf{q} \in \mathbb{R}^M$, computed using the force closure algorithm [55]; view direction $\mathbf{v} \in \mathbb{R}^{3M}$; angle $\mathbf{a} \in \mathbb{R}^M$; width $\mathbf{w} \in \mathbb{R}^M$; and depth $\mathbf{d} \in \mathbb{R}^M$:

$$
\mathbf{g} = \left[\begin{array}{llllll} \mathbf{s} & \mathbf{q} & \mathbf{v} & \mathbf{a} & \mathbf{w} & \mathbf{d} \end{array}\right], \tag{2}
$$

where $M$ denotes the number of total grasps in the target octree, and the closest grasp poses within a $5\mathrm{mm}$ radius

Figure 3. 3D occlusion fields localize occlusion information by casting rays from the camera to the voxel centers around the target object and performing depth tests. Specifically, if a ray intersects the target object, a self-occlusion flag $o_{\mathrm{self}}$ is set to 1. If it intersects non-target objects, an inter-object occlusion flag $o_{\mathrm{inter}}$ is set to 1.

are assigned to each point. If none exists, we set its corresponding graspness to 0. In GraspNet-1B and ZeroGrasp-11B datasets, each point is annotated with a dense set of grasp poses covering all combinations of views, angles, and depths $(300 \times 12 \times 4)$ . With the grasp poses $\mathbf{g}$ , the target octree is defined as

$$
\mathbf{y} = \left(\mathbf{p}^{gt}, \mathbf{f}^{gt}\right) = \left(\mathbf{p}^{gt}, \left[\begin{array}{lll} \phi & \mathbf{n} & \mathbf{g} \end{array}\right]\right), \tag{3}
$$

where $\phi \in \mathbb{R}^M$ is the SDF, and $\mathbf{n}\in \mathbb{R}^{M\times 3}$ are the normal vectors of the target octree.
# 3.1. Architecture
Given input octrees $\mathbf{x}$ , composed of per-instance partial point clouds derived from depth maps and instance masks, along with their corresponding image features, we aim to predict 3D reconstructions and grasp poses $\hat{\mathbf{y}}$ represented as octrees. ZeroGrasp is built upon an octree-based U-Net [9] and conditional variational autoencoder (CVAE) [63] to model shape reconstruction uncertainty and grasp pose prediction, while maintaining near real-time inference. We present two key innovations to improve its accuracy and generalization. Specifically, we introduce (1) multi-object encoder to model spatial relations between objects via a 3D transformer in the latent space, enabling collision-free 3D reconstructions and grasp poses, and (2) 3D occlusion fields, a novel 3D occlusion representation which captures inter-object occlusions to enhance shape reconstruction in occluded regions.

Octree feature extraction. An RGB image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$ is encoded to extract an image feature map $\mathbf{W}$. We fine-tune SAM 2 [64] to generate 2D instance masks $\mathbf{M} \in \mathbb{R}^{H \times W}$, where $\mathbf{M}_i$ represents the $i$-th object mask. The image feature map is then unprojected into 3D space by $(\mathbf{q}_i, \mathbf{w}_i) = \pi^{-1}(\mathbf{W}, \mathbf{D}, \mathbf{K}, \mathbf{M}_i)$, where $\mathbf{q}_i$ and $\mathbf{w}_i$ denote

the 3D point cloud and its corresponding features of the $i$-th object, respectively. Here, $\pi$ is the unprojection function, $\mathbf{D} \in \mathbb{R}^{H \times W}$ is the depth map, and $\mathbf{K} \in \mathbb{R}^{3 \times 3}$ denotes the camera intrinsics. The 3D point cloud features are converted to an octree $\mathbf{x}_i = (\mathbf{p}_i, \mathbf{f}_i) = \mathcal{G}(\mathbf{q}_i, \mathbf{w}_i)$, where $\mathcal{G}$ is the conversion function from the point cloud and its features to an octree.
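
As an illustration of the unprojection step $\pi^{-1}$ (a minimal sketch under the usual pinhole model, not the authors' implementation; the function and variable names are assumptions), one can lift the masked depth pixels of object $i$ into camera-frame 3D points and gather their image features like this:

```python
import numpy as np

def unproject(feat_map, depth, K, mask):
    """Lift masked depth pixels to 3D points and gather per-pixel features.

    feat_map: (H, W, D) image features, depth: (H, W) metric depth,
    K: (3, 3) camera intrinsics, mask: (H, W) boolean instance mask.
    Returns q_i (P, 3) camera-frame points and w_i (P, D) features.
    """
    v, u = np.nonzero(mask & (depth > 0))      # pixel coordinates inside the mask
    z = depth[v, u]
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) * z / fx                      # pinhole back-projection
    y = (v - cy) * z / fy
    q_i = np.stack([x, y, z], axis=1)
    w_i = feat_map[v, u]
    return q_i, w_i
```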

Octree-based CVAE. To improve the shape reconstruction quality, ZeroGrasp utilizes probabilistic modeling through an octree-based conditional variational autoencoder (CVAE) to address the inherent uncertainty in single-view shape reconstruction, which is crucial for improving both reconstruction and grasp pose prediction quality. Inspired by [63], our octree-based CVAE consists of an encoder $\mathcal{E}$, prior $\mathcal{P}$, and decoder $\mathcal{D}$ to learn latent representations of 3D shapes and grasp poses together as a diagonal Gaussian. Concretely, the encoder $\mathcal{E}(\mathbf{z}_i \mid \mathbf{x}_i, \mathbf{y}_i)$ learns to predict the latent code $\mathbf{z}_i$ based on the input and ground-truth octrees $\mathbf{x}_i$ and $\mathbf{y}_i$. The prior $\mathcal{P}(\ell_i, \mathbf{z}_i \mid \mathbf{x}_i)$ takes the octree $\mathbf{x}_i$ as input and computes the latent feature $\ell_i \in \mathbb{R}^{N_i' \times D'}$ and code $\mathbf{z}_i \in \mathbb{R}^{D'}$, where $N_i'$ and $D'$ are the number of points and the dimension of the latent feature. Internally, the latent code is sampled from the predicted mean and variance via the reparameterization trick [65]. The decoder $\mathcal{D}(\mathbf{y}_i \mid \ell_i, \mathbf{z}_i, \mathbf{x}_i)$ predicts a 3D reconstruction along with grasp poses. To save computational cost, the decoder predicts occupancy at each depth, discarding grid cells with a probability below 0.5. Only in the final layer does the decoder predict the SDF, normal vectors, and grasp poses as well as occupancy. During training, the KL divergence between the encoder and prior is minimized such that their distributions are matched.

Multi-object encoder. The prior $\mathcal{P}$ computes features per object, lacking the capability of modeling global spatial arrangements for collision-free reconstruction and grasp pose prediction. To address this, we incorporate a transformer in the latent space, composed of $K$ standard Transformer blocks with self-attention and RoPE [66] positional encoding, following [10]. The multi-object encoder $\mathcal{M}$ takes the voxel centers $\mathbf{r}_i\in \mathbb{R}^{N_i'\times 3}$ and features $\ell_{i}\in \mathbb{R}^{N_{i}^{\prime}\times D^{\prime}}$ of all the objects in the latent space, which are updated as

$$
\left[\ell_{1} \dots \ell_{L}\right] \leftarrow \mathcal{M}\left(\left[\left(\mathbf{r}_{1}, \ell_{1}\right) \dots \left(\mathbf{r}_{L}, \ell_{L}\right)\right]\right), \tag{4}
$$

where $L$ represents the total number of objects.
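
A rough sketch of this step (an illustration only; the real model operates on octree features and uses RoPE positional encoding, which are omitted here, and all module names are assumptions) is to concatenate the latent tokens of all objects, run shared self-attention over them, and split the result back per object:

```python
import torch
import torch.nn as nn

class MultiObjectEncoder(nn.Module):
    """Self-attention over the concatenated latent tokens of all objects."""

    def __init__(self, dim=192, heads=8, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.pos = nn.Linear(3, dim)  # crude stand-in for RoPE on voxel centers

    def forward(self, centers_list, feats_list):
        # centers_list[i]: (N_i, 3), feats_list[i]: (N_i, D)
        sizes = [f.shape[0] for f in feats_list]
        tokens = torch.cat([f + self.pos(c) for c, f in zip(centers_list, feats_list)], dim=0)
        tokens = self.blocks(tokens.unsqueeze(0)).squeeze(0)  # joint attention across objects
        return list(torch.split(tokens, sizes, dim=0))        # back to per-object features
```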
3D occlusion fields. Our key insight is that the multi-object encoder primarily learns to avoid collisions between objects and grasp poses in a cluttered scene, as collision modeling requires only local context, making it easier to handle. In contrast, occlusion modeling requires a comprehensive understanding of the global context to accurately capture visibility relationships, since occluders and

ReOcS: (a) RGB Image, (b) Stereo Depth Map, (c) 3D Shapes (+Grasps).

Figure 4. Example RGB images, stereo depth maps, 3D shapes and grasp poses from the ReOcS and ZeroGrasp-11B datasets. The grasp poses of the ZeroGrasp-11B dataset are subsampled by grasp-NMS [1] for better visibility of the 3D shapes and grasps. More examples are found in the supplementary material.

Figure 5. Contact-based constraints are used to effectively refine grasp poses. We first obtain the contact points $\mathbf{c}_{\mathrm{L}}$ and $\mathbf{c}_{\mathrm{R}}$. Next, the contact distance $D(\mathbf{c}_{\mathrm{L|R}})$ and the depth $Z(\mathbf{c}_{\mathrm{L|R}})$ are computed. Finally, the width and depth of the grasp are refined based on Eq. (10) and Eq. (11).

occludees can be positioned far apart. To mitigate this issue, we design 3D occlusion fields that localize visibility information to voxels via simplified octree-based volume rendering. Concretely, we subdivide a voxel at the latent space into $B^3$ smaller blocks ($B$ blocks per axis), which are projected into the image space. As shown in Figure 3, if a block lies within the instance mask corresponding to the target object, a self-occlusion flag $o_{\mathrm{self}}$ is set to 1. If the block lies within the instance mask of a neighboring object, an inter-object occlusion flag $o_{\mathrm{inter}}$ is set to 1. After computing the flags of all the blocks, we construct the 3D occlusion fields $\pmb{\nu}_i \in \mathbb{R}^{N' \times B^3 \times 2}$ by concatenating the two flags of the $i$-th object. Finally, we encode it by three layers of 3D CNNs that downsample the resolution by a factor of two at each layer to obtain an occlusion feature $\mathbf{o}_i \in \mathbb{R}^{N' \times D''}$ at the latent space, and update the latent feature by $\ell_i \gets [\ell_i \ \mathbf{o}_i]$ to account for occlusions as well as collisions.
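
For illustration, the per-block flags could be computed as below. This is a simplified sketch rather than the authors' implementation: it projects block centers with the pinhole intrinsics and checks the instance-id map instead of performing octree ray casting with depth tests, and all names are assumptions.

```python
import numpy as np

def occlusion_flags(block_centers, K, inst_masks, target_id):
    """Compute (o_self, o_inter) for each sub-block of a latent voxel.

    block_centers: (N', B**3, 3) camera-frame centers of the sub-blocks,
    K: (3, 3) intrinsics, inst_masks: (H, W) integer instance-id map,
    target_id: id of the object the voxel belongs to.
    Returns flags of shape (N', B**3, 2).
    """
    pts = block_centers.reshape(-1, 3)
    uv = (K @ (pts / pts[:, 2:3]).T).T[:, :2]                    # project to pixels
    u = np.clip(uv[:, 0].astype(int), 0, inst_masks.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, inst_masks.shape[0] - 1)
    ids = inst_masks[v, u]
    o_self = (ids == target_id).astype(np.float32)               # falls on the target object
    o_inter = ((ids > 0) & (ids != target_id)).astype(np.float32)  # falls on another object
    return np.stack([o_self, o_inter], axis=-1).reshape(*block_centers.shape[:2], 2)
```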
Training. Similar to the standard VAEs [63, 65], we train our model by maximizing the evidence lower bound (ELBO). Additionally, we opt for economic supervision [67] to learn grasp pose prediction efficiently. Therefore, the loss function is defined as

$$
\mathcal{L}_{\mathrm{rec}} = \omega_{\mathrm{occ}} \sum_{h}^{H} \mathcal{L}_{\mathrm{occ}}^{h} + \omega_{\mathrm{nrm}} \mathcal{L}_{\mathrm{nrm}} + \omega_{\mathrm{SDF}} \mathcal{L}_{\mathrm{SDF}}, \tag{5}
$$

$$
\mathcal{L}_{\mathrm{grasp}} = \omega_{\mathrm{s}} \mathcal{L}_{\mathrm{s}} + \omega_{\mathrm{q}} \mathcal{L}_{\mathrm{q}} + \omega_{\mathrm{a}} \mathcal{L}_{\mathrm{a}} + \omega_{\mathrm{w}} \mathcal{L}_{\mathrm{w}} + \omega_{\mathrm{d}} \mathcal{L}_{\mathrm{d}}, \tag{6}
$$

$$
\mathcal{L}_{\mathrm{KL}} = \omega_{\mathrm{KL}} D_{\mathrm{KL}}\left(\mathcal{E}\left(\mathbf{z}_{i} \mid \mathbf{x}_{i}, \mathbf{y}_{i}\right) \,\|\, \mathcal{P}\left(\ell_{i}, \mathbf{z}_{i} \mid \mathbf{x}_{i}\right)\right), \tag{7}
$$

$$
\mathcal{L} = \mathcal{L}_{\mathrm{rec}} + \mathcal{L}_{\mathrm{grasp}} + \mathcal{L}_{\mathrm{KL}}, \tag{8}
$$

where $\mathcal{L}_{\mathrm{occ}}^h$ computes the mean of the binary cross entropy (BCE) of occupancy at each depth $h$, and $\mathcal{L}_{\mathrm{nrm}}$ and $\mathcal{L}_{\mathrm{SDF}}$ represent the averaged L1 distances of the surface normals and SDF, respectively, at the final depth of the octree. $\mathcal{L}_{\mathrm{s}}$ computes the averaged L1 distance of graspness over all possible views, and $\mathcal{L}_{\mathrm{q}}$, $\mathcal{L}_{\mathrm{a}}$, $\mathcal{L}_{\mathrm{w}}$, and $\mathcal{L}_{\mathrm{d}}$ compute the cross entropy for
quality, angle, width, and depth respectively. Finally, the term $\mathcal{L}_{\mathrm{KL}}$ measures the KL divergence between the encoder $\pmb{\mathcal{E}}$ and the prior $\mathcal{P}$ . Each $\omega$ term is a weight parameter to align the scale of different loss terms.
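
A schematic of how these terms might be combined in training code (an illustrative sketch only; the actual loss heads, weights, and tensor layouts are not specified in the paper, so all names here are assumptions):

```python
import torch
import torch.nn.functional as F

def zerograsp_loss(pred, target, w):
    """Combine reconstruction, grasp, and KL terms as in Eqs. (5)-(8).

    pred/target are dicts of per-voxel tensors; w is a dict of loss weights.
    """
    # Eq. (5): occupancy BCE at every octree depth, plus normal / SDF L1 terms.
    l_rec = w["occ"] * sum(F.binary_cross_entropy_with_logits(p, t)
                           for p, t in zip(pred["occ"], target["occ"]))
    l_rec += w["nrm"] * F.l1_loss(pred["normal"], target["normal"])
    l_rec += w["sdf"] * F.l1_loss(pred["sdf"], target["sdf"])

    # Eq. (6): graspness regression plus classification of the remaining attributes.
    l_grasp = w["s"] * F.l1_loss(pred["graspness"], target["graspness"])
    for k in ("q", "a", "w", "d"):
        l_grasp += w[k] * F.cross_entropy(pred[k], target[k])

    # Eq. (7): KL between the encoder posterior and the prior (diagonal Gaussians).
    mu_e, lv_e, mu_p, lv_p = pred["mu_e"], pred["logvar_e"], pred["mu_p"], pred["logvar_p"]
    kl = 0.5 * (lv_p - lv_e + (lv_e.exp() + (mu_e - mu_p) ** 2) / lv_p.exp() - 1.0)
    l_kl = w["kl"] * kl.sum(dim=-1).mean()

    return l_rec + l_grasp + l_kl  # Eq. (8)
```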
# 3.2. Grasp Pose Refinement
We find that a strong advantage of 3D reconstruction is its ability to utilize the reconstruction to refine predicted grasp poses. While Ma et al. [3] propose a contact-based optimization algorithm, it requires an accurate truncated signed distance field (TSDF) reconstructed from multi-view images and its runtime is relatively slow. In contrast, we introduce a simple refinement algorithm that applies contact-based constraints and collision detection on the 3D reconstruction. Specifically, we first detect contact points by finding the closest points on the reconstruction to the left and right fingers of the gripper. We then adjust the predicted width and depth so that both fingertips have contact. Finally, we perform collision detection with the reconstruction to discard grasp poses with collisions. In the following, we explain the details of these two refinement processes.

Contact-based constraints. Accurate contacts are essential for successful grasping, as they ensure stability and control during manipulation. While our network predicts the width and depth of the gripper, we observe that even small errors can result in unstable grasping. To address this issue, we refine a grasp pose by adjusting the fingertip locations of the gripper to align with the nearest contact points of the left and right fingers $\mathbf{c}_{\mathrm{L}}$ and $\mathbf{c}_{\mathrm{R}}$ on the reconstruction. Based on the contact points, the width $\mathbf{w}$ is refined as

$$
\Delta \mathbf{w} = \min\left(D\left(\mathbf{c}_{\mathrm{L}}\right), D\left(\mathbf{c}_{\mathrm{R}}\right)\right), \tag{9}
$$

$$
\mathbf{w} \leftarrow \mathbf{w} + 2\left(\max\left(\gamma_{\min}, \min\left(\Delta \mathbf{w}, \gamma_{\max}\right)\right) - \Delta \mathbf{w}\right), \tag{10}
$$

so that the contact distance $\Delta \mathbf{w}$ remains within the range $\gamma_{\mathrm{min}}$ to $\gamma_{\mathrm{max}}$ . Note that $D(\mathbf{c})$ denotes the contact distance

Columns (left to right): RGB-D Image, MinkowskiNet, OCNN, OctMAE, Ours, Ground-Truth.

Figure 6. Comparisons of 3D reconstruction methods using sparse voxel representations on the ReOcS dataset. Except for OctMAE [10], an RGB-D image and predicted instance mask are given as input, and the methods output per-object reconstructions. For OctMAE, we visualize its results with normal maps since it is designed to predict a scene-level reconstruction. For a fair comparison, all the models are trained only on the ZeroGrasp-11B dataset. The red rectangles highlight the regions with major differences.

Table 2. Quantitative evaluation of 3D reconstruction on the GraspNet-1B [1] and ReOcS datasets with different difficulties. Chamfer distance (CD, in mm), F1-Score@10mm (F1), and normal consistency (NC) are reported. Seg. denotes whether the output 3D reconstruction is segmented.

<table><tr><td rowspan="3">Method</td><td rowspan="3">Seg.</td><td rowspan="2" colspan="3">GraspNet-1B [1]</td><td colspan="9">ReOcS (Ours)</td></tr><tr><td colspan="3">Easy</td><td colspan="3">Normal</td><td colspan="3">Hard</td></tr><tr><td>CD↓</td><td>F1↑</td><td>NC↑</td><td>CD↓</td><td>F1↑</td><td>NC↑</td><td>CD↓</td><td>F1↑</td><td>NC↑</td><td>CD↓</td><td>F1↑</td><td>NC↑</td></tr><tr><td>Minkowski [8]</td><td>✓</td><td>6.84</td><td>81.45</td><td>77.89</td><td>5.59</td><td>85.40</td><td>84.74</td><td>6.05</td><td>82.15</td><td>82.68</td><td>9.11</td><td>77.10</td><td>80.86</td></tr><tr><td>OCNN [43]</td><td>✓</td><td>7.23</td><td>82.22</td><td>78.44</td><td>5.26</td><td>85.43</td><td>85.66</td><td>5.96</td><td>82.33</td><td>84.25</td><td>8.69</td><td>77.58</td><td>82.08</td></tr><tr><td>OctMAE [10]</td><td></td><td>7.57</td><td>78.38</td><td>75.19</td><td>5.53</td><td>87.62</td><td>86.90</td><td>5.93</td><td>83.98</td><td>83.45</td><td>6.76</td><td>80.24</td><td>80.58</td></tr><tr><td>Ours</td><td>✓</td><td>6.05</td><td>84.08</td><td>78.46</td><td>4.76</td><td>88.71</td><td>86.74</td><td>5.54</td><td>84.67</td><td>85.13</td><td>6.73</td><td>80.86</td><td>82.95</td></tr></table>

from $\mathbf{c}$. We further adjust the depth $\mathbf{d}$ by

$$
\mathbf{d} \leftarrow \max\left(Z\left(\mathbf{c}_{\mathrm{L}}\right), Z\left(\mathbf{c}_{\mathrm{R}}\right)\right), \tag{11}
$$

where $Z(\mathbf{c})$ computes the depth of the contact point $\mathbf{c}$. These simple refinement steps help ensure stable grasps.
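
As a worked illustration of Eqs. (9)-(11) (a sketch only; the helper callables `contact_distance` and `contact_depth` stand in for $D(\cdot)$ and $Z(\cdot)$ and are assumptions):

```python
import numpy as np

GAMMA_MIN, GAMMA_MAX = 0.005, 0.02   # contact-distance range in meters (Sec. 5)

def refine_grasp(width, c_left, c_right, contact_distance, contact_depth):
    """Refine gripper width and depth from the two contact points (Eqs. 9-11)."""
    # Eq. (9): smallest finger-to-surface distance.
    dw = min(contact_distance(c_left), contact_distance(c_right))
    # Eq. (10): shift the width so the clamped contact distance is respected.
    width = width + 2.0 * (np.clip(dw, GAMMA_MIN, GAMMA_MAX) - dw)
    # Eq. (11): push the gripper in until both contacts are reached.
    depth = max(contact_depth(c_left), contact_depth(c_right))
    return width, depth
```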

Collision detection. We implement a simple model-free collision detector using the two-finger gripper, following GSNet [57]. Although the previous method uses a partial point cloud obtained from a depth map, it fails to discard predicted grasp poses that result in collisions with occluded regions. To overcome this limitation, we instead leverage the reconstructed shapes, which allows more precise collision detection. To justify this approach, we perform extensive analysis in our experiments and show the advantages.
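
A simplified sketch of such a reconstruction-based collision check (illustrative only; the actual detector follows GSNet's gripper model, whereas here the gripper is approximated by axis-aligned boxes in the grasp frame, and all names and dimensions are assumptions):

```python
import numpy as np

def in_collision(recon_points, grasp_pose, width, depth,
                 finger_thickness=0.01, height=0.02):
    """Return True if reconstructed surface points intersect the gripper volume.

    recon_points: (N, 3) reconstructed points in the camera frame.
    grasp_pose: (R, t) with rotation (3, 3) and translation (3,) of the grasp frame.
    """
    R, t = grasp_pose
    local = (recon_points - t) @ R                      # points in the grasp frame
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    inside_depth = (x > 0) & (x < depth) & (np.abs(z) < height / 2)
    # Points inside either finger volume (just outside the opening width).
    fingers = inside_depth & (np.abs(y) > width / 2) & \
              (np.abs(y) < width / 2 + finger_thickness)
    # Points behind the gripper base also collide.
    base = (x < 0) & (x > -finger_thickness) & \
           (np.abs(y) < width / 2 + finger_thickness) & (np.abs(z) < height / 2)
    return bool(np.any(fingers | base))
```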
# 4. Datasets
We create two datasets for evaluation and training — 1) the ReOcS dataset is designed to evaluate the quality of 3D reconstruction under varying occlusion levels, and 2) the ZeroGrasp-11B dataset is intended for training baselines and our model for zero-shot robotic grasping. Figure 4 highlights several examples of the datasets.
# 4.1. ReOcS Dataset
The ReOcS dataset contains 1,125 RGB-D images and ground-truth instance masks, 6D poses, and 3D shapes. To
obtain accurate depth maps of metallic and transparent objects, we use a learning-based stereo depth estimation algorithm [71]. There are three splits — easy, normal and hard — based on the extent of occlusions. We use this dataset to compare the robustness of baselines and our method under different occlusion conditions. For the details, please refer to the supplementary material.
# 4.2. ZeroGrasp-11B Dataset

As shown in Table 1, the ZeroGrasp-11B dataset leverages 12K 3D models and creates 1M photorealistic RGB images, as well as ground-truth and stereo depth maps, of 25,000 scenes with BlenderProc [72]. In addition, it provides ground-truth 3D reconstructions and 6D object poses. While Grasp-Anything-6D [18] has 6D annotations for a larger number of objects, its 3D models are missing, which are crucial for reconstruction. Further, its synthesized images and predicted depth maps have no guarantee of being physically valid, and its grasp pose annotations are sparse and generated from planar grasp poses. We solve these issues with the ZeroGrasp-11B dataset to enable zero-shot robotic grasping. In the following, we describe the procedure of grasp pose generation.
Grasp pose generation. Following [6], we begin by randomly sampling $N_{s}$ surface points on ground-truth 3D reconstructions. $N_{s}$ is determined by $N_{s} = \mathcal{A} / \rho$ with

Panels: Recon. / Grasp Poses for Scene 100 (Seen), Scene 110 (Seen), Scene 130 (Similar), Scene 140 (Similar), Scene 160 (Novel), and Scene 170 (Novel).

Figure 7. Qualitative results on grasp pose prediction of ZeroGrasp. Following GSNet [57], we show the best 50 grasp predictions after grasp-NMS [1] from six different scenes (two scenes per split). Red and blue grasps denote high and low grasp quality scores, respectively.

Table 3. Quantitative evaluation of grasp pose prediction on the GraspNet-1Billion benchmark. Similar to the other baseline methods, we report the average precision (AP), $\mathrm{AP}_{0.4}$, and $\mathrm{AP}_{0.8}$. Note that 0.4 and 0.8 denote the friction coefficients, and the lower the coefficient, the more difficult the grasp. G and R in the Output column indicate whether grasp poses and 3D reconstructions are predicted, respectively.

<table><tr><td rowspan="2">Method</td><td colspan="2">Output</td><td colspan="3">Seen</td><td colspan="3">Similar</td><td colspan="3">Novel</td></tr><tr><td>G</td><td>R</td><td>AP</td><td>AP0.8</td><td>AP0.4</td><td>AP</td><td>AP0.8</td><td>AP0.4</td><td>AP</td><td>AP0.8</td><td>AP0.4</td></tr><tr><td>GG-CNN [5]</td><td>✓</td><td></td><td>15.48</td><td>21.84</td><td>10.25</td><td>13.26</td><td>18.37</td><td>4.62</td><td>5.52</td><td>5.93</td><td>1.86</td></tr><tr><td>Chu et al. [68]</td><td>✓</td><td></td><td>15.97</td><td>23.66</td><td>10.80</td><td>15.41</td><td>20.21</td><td>7.06</td><td>7.64</td><td>8.69</td><td>2.52</td></tr><tr><td>CenterGrasp†[59]</td><td>✓</td><td>✓</td><td>16.46</td><td>20.24</td><td>11.74</td><td>9.52</td><td>11.92</td><td>5.71</td><td>1.60</td><td>1.89</td><td>1.12</td></tr><tr><td>GPD [69]</td><td>✓</td><td></td><td>22.87</td><td>28.53</td><td>12.84</td><td>21.33</td><td>27.83</td><td>9.64</td><td>8.24</td><td>8.89</td><td>2.67</td></tr><tr><td>Lian et al. [4]</td><td>✓</td><td></td><td>25.96</td><td>33.01</td><td>15.37</td><td>22.68</td><td>29.15</td><td>10.76</td><td>9.23</td><td>9.89</td><td>2.74</td></tr><tr><td>GraspNet [1]</td><td>✓</td><td></td><td>27.56</td><td>33.43</td><td>16.59</td><td>26.11</td><td>34.18</td><td>14.23</td><td>10.55</td><td>11.25</td><td>3.98</td></tr><tr><td>GSNet [57]</td><td>✓</td><td></td><td>67.12</td><td>78.46</td><td>60.90</td><td>54.81</td><td>66.72</td><td>46.17</td><td>24.31</td><td>30.52</td><td>14.23</td></tr><tr><td>Ma et al. [70]</td><td>✓</td><td></td><td>63.83</td><td>74.25</td><td>58.66</td><td>58.46</td><td>70.05</td><td>51.32</td><td>24.63</td><td>31.05</td><td>12.85</td></tr><tr><td>HGGD</td><td>✓</td><td></td><td>64.45</td><td>72.81</td><td>61.16</td><td>53.59</td><td>64.12</td><td>45.91</td><td>24.59</td><td>30.46</td><td>15.58</td></tr><tr><td>EconomicGrasp [67]</td><td>✓</td><td></td><td>68.21</td><td>79.60</td><td>63.54</td><td>61.19</td><td>73.60</td><td>53.77</td><td>25.48</td><td>31.46</td><td>13.85</td></tr><tr><td>Ours</td><td>✓</td><td>✓</td><td>70.53</td><td>82.28</td><td>64.26</td><td>62.51</td><td>74.26</td><td>54.97</td><td>26.46</td><td>33.13</td><td>15.11</td></tr><tr><td>Ours+FT</td><td></td><td></td><td>72.43</td><td>83.12</td><td>65.57</td><td>65.45</td><td>78.32</td><td>55.48</td><td>28.49</td><td>34.21</td><td>15.80</td></tr></table>

Table 4. Ablations on the network input, architecture, and refinement algorithm. For reconstruction and grasp pose prediction, we report the metrics on the hard split of the ReOcS dataset and on the GraspNet-1B dataset, respectively.

<table><tr><td rowspan="2">Method</td><td colspan="3">Reconstruction</td><td colspan="3">Grasp Pose</td></tr><tr><td>CD↓</td><td>F1↑</td><td>NC↑</td><td>Seen</td><td>Similar</td><td>Novel</td></tr><tr><td>Baseline (OCNN [9])</td><td>8.69</td><td>77.58</td><td>82.08</td><td>41.27</td><td>36.48</td><td>17.46</td></tr><tr><td>No CVAE</td><td>7.67</td><td>78.79</td><td>82.35</td><td>70.23</td><td>60.31</td><td>26.28</td></tr><tr><td>No Multi-Obj. Encoder</td><td>7.09</td><td>79.62</td><td>82.60</td><td>69.52</td><td>61.03</td><td>26.17</td></tr><tr><td>No 3D Occlusion Fields</td><td>7.54</td><td>78.81</td><td>81.94</td><td>67.34</td><td>58.45</td><td>25.00</td></tr><tr><td>No Contact Constraints</td><td>6.73</td><td>80.86</td><td>82.95</td><td>65.67</td><td>55.34</td><td>24.92</td></tr><tr><td>No Collision Detection</td><td>6.73</td><td>80.86</td><td>82.95</td><td>49.35</td><td>44.28</td><td>21.03</td></tr><tr><td>Collision Detection w/ Depth Map</td><td>6.73</td><td>80.86</td><td>82.95</td><td>59.93</td><td>51.58</td><td>24.07</td></tr><tr><td>Ours</td><td>6.73</td><td>80.86</td><td>82.95</td><td>70.53</td><td>62.51</td><td>26.46</td></tr></table>
$\mathcal{A}$ denoting the surface area and $\rho$ a density parameter. For each surface point, we synthesize candidate grasps with all combinations of views, orientations around the point's normal vector, and depths, following GraspNet-1B [1]. Next, we conduct collision detection to eliminate any grasps with collisions and compute the grasp quality $\mathbf{q}$ for the remaining candidates. The quality metric [55] is computed based on the normal vectors $\mathbf{n}_L$ and $\mathbf{n}_R$ of the contact points $\mathbf{c}_L$ and $\mathbf{c}_R$ by $\mathbf{q} = \min (\mathbf{n}_L\cdot \mathbf{c}_{LR},\mathbf{n}_R\cdot \mathbf{c}_{LR})$, where $\mathbf{c}_{LR} = (\mathbf{c}_L - \mathbf{c}_R) / \|\mathbf{c}_L - \mathbf{c}_R\|$. Finally, we physically validate the generated grasps with IsaacGym [73]. To make the Objaverse 3D models compatible with simulation, we decompose them into convex hulls using V-HACD [74]. Figure 4 shows the grasp poses before and after the collision and physics-based filtering process.
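
For illustration, the antipodal quality score could be computed as below (a sketch only; the normal sign convention follows the dataset annotations, the names are assumptions, and the subsequent physics validation in IsaacGym is not shown):

```python
import numpy as np

def grasp_quality(c_left, c_right, n_left, n_right):
    """Antipodal quality q = min(n_L . c_LR, n_R . c_LR) for a single grasp."""
    c_lr = c_left - c_right
    c_lr = c_lr / (np.linalg.norm(c_lr) + 1e-9)   # unit vector between the contacts
    # Higher is better: both contact normals should align with the closing axis.
    return float(min(np.dot(n_left, c_lr), np.dot(n_right, c_lr)))
```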
# 5. Experiments
Implementation details. Our proposed method, ZeroGrasp, adopts a ResNeXt [75] architecture, pretrained on the ImageNet dataset [76], as an image encoder, and all the parameters except the last layer are fixed during training.

Similar to EconomicGrasp [67], we use the predicted view graspness $\mathbf{s}$ to determine a view direction. For training, we use AdamW [77] with a learning rate of 0.001 and a batch size of 16 on an NVIDIA A100 GPU. The weights of the loss function are provided in the supplementary material. We set the dimensions of the input image feature $D$, the latent feature $D^{\prime}$, and the 3D occlusion fields $\nu$ to 32, 192, and 16, respectively. For the 3D occlusion fields, we use 8 for the block resolution $B$. Following Ma et al., the contact distance bounds $\gamma_{\mathrm{min}}$ and $\gamma_{\mathrm{max}}$ are set to $0.005\mathrm{m}$ and $0.02\mathrm{m}$, respectively. To generate grasp poses, we use $0.005\mathrm{m}^2$ as the sampling density $\rho$.

Metrics. Similar to OctMAE [10], we use the Chamfer distance (CD), F1 score, and normal consistency (NC) to evaluate the quality of 3D reconstruction. To evaluate the quality of grasp pose prediction, we use average precision (AP), a standard metric of the GraspNet-1B benchmark, which evaluates average precision based on the top-k ranked grasps in a scene. The $\mathrm{AP}_{\mu}$ metric measures precision by evaluating grasps with friction coefficient $\mu$ over $m$ different thresholds. The final AP score is computed as the mean of $\mathrm{AP}_{\mu}$, using friction values $\mu$ from 0.2 to 1.2
at intervals of 0.2.
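
For instance (illustrative numbers only, not taken from the benchmark), the final AP is just the mean over the six friction levels:

```python
# Hypothetical per-friction scores AP_mu for mu = 0.2, 0.4, ..., 1.2
ap_mu = {0.2: 41.0, 0.4: 64.3, 0.6: 72.8, 0.8: 82.3, 1.0: 85.1, 1.2: 86.6}
ap = sum(ap_mu.values()) / len(ap_mu)   # final AP reported on the benchmark
print(round(ap, 2))                     # 72.02
```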
# 5.1. Main Results

3D reconstruction. As shown in Table 2, our method outperforms the other single-view reconstruction methods. We choose the three baselines using sparse voxel representations due to their superior efficiency and accuracy in a zero-shot setup, as reported in Iwase et al. [10]. We train the baseline and our methods on the ZeroGrasp-11B dataset and evaluate them on the GraspNet-1B and ReOcS datasets to test generalization to real-world images. Our qualitative evaluation in Figure 6 demonstrates the robustness of ZeroGrasp to real-world images and inter-object occlusions.

Grasp pose prediction. Table 3 shows the comparison against state-of-the-art methods for grasp pose prediction on the RealSense data of the GraspNet-1Billion benchmark. The baselines and our model are trained on the training split of the GraspNet-1Billion dataset for 20 epochs. Notably, our method achieves state-of-the-art performance across all the AP metrics. In the Ours+FT setup, our model is initially pre-trained on the ZeroGrasp-11B dataset, then fine-tuned on the GraspNet-1Billion dataset for 2 epochs. As a result, fine-tuning improves AP by $1.9\%$, $2.94\%$, and $2.03\%$ on the seen, similar, and novel splits, respectively. This result supports the importance of large-scale grasp pose datasets for zero-shot robotic grasping. Figure 7 shows qualitative results of ZeroGrasp. Unlike previous methods, ZeroGrasp enables accurate grasp pose prediction even in occluded or truncated regions.
# 5.2. Ablations
Table 4 shows our ablation studies to validate the effectiveness of each component. We provide detailed analyses from the perspectives of the two tasks addressed in our work.
3D reconstruction. We observe a consistent drop in performance across all reconstruction metrics when each of CVAE, the multi-object encoder, and 3D occlusion fields is individually excluded. This highlights the importance of multi-object reasoning to achieve higher reconstruction quality. As shown in Figure 6, our visualizations further demonstrate that these components contribute to better reconstruction, especially in regions with inter-object occlusions and close contacts between objects.
Grasp pose prediction. As illustrated in Table 4, most of the components contribute to improved grasp pose detection. In particular, collision detection and contact-based constraints provide a significant boost to grasp pose quality. Our comparison of collision detection using a depth map (partial point clouds) as in GSNet [57] and our predicted reconstruction (59.93 vs 70.53) reveals that reconstruction-based collision detection is more effective. Furthermore, the

Figure 8. Example scenes of our real-robot evaluation.

substantial performance drop without 3D occlusion fields underscores the importance of reasoning about inter-object occlusions.
# 5.3. Real-Robot Evaluation

We validate the feasibility and generalizability of the baseline (OCNN [9]) and our method, trained only on our synthetic dataset, through real-world evaluations. Our robotic setup uses a Franka Emika Panda robot and a Robotiq 2F-85 gripper. As shown in Figure 8, we set up 5 scenes with 3 to 4 objects. Each object is picked up in repeated trials, with a maximum of 3 attempts per object. Our success rate, measured by the ratio of objects successfully picked up, is $56.25\%$ for the baseline and $75\%$ for our method, highlighting the strong generalization of our approach in real-world scenarios. We describe more details about the robotic setup and show qualitative results in the supplementary material.
# 6. Conclusion
In this paper, we propose ZeroGrasp, a novel approach for simultaneous 3D reconstruction and grasp pose prediction. By integrating five key components, ZeroGrasp enhances both shape reconstruction and grasp prediction quality. Our extensive analysis confirms the effectiveness of these components. In addition, we strongly believe that ZeroGrasp-11B dataset facilitates future research in zero-shot robotic grasping. Despite its promising generalization capabilities, ZeroGrasp has some limitations. First, our method does not support incremental or multi-view 3D reconstruction [78, 79], which is beneficial when using a wrist-mounted camera on an end effector. Second, it does not account for placement poses that could leverage predicted 3D reconstructions. While this paper focuses on single-view 3D reconstruction and grasp pose prediction, exploring these directions would be valuable.
# Acknowledgment
We thank Tianyi Ko for help with the real-world robot experiments. This research was supported by Toyota Research Institute.
# References
|
| 331 |
+
|
| 332 |
+
[1] H.-S. Fang, C. Wang, M. Gou, and C. Lu, "Graspnet-1billion: A large-scale benchmark for general object grasping," in CVPR, 2020. 1, 2, 3, 5, 6, 7

[2] Y. Jiang, S. Moseson, and A. Saxena, "Efficient grasping from rgbd images: Learning using a new rectangle representation," in ICRA, 2011. 3

[3] M. Haoxiang, S. Modi, B. Gao, and H. Di, "Generalizing 6-dof grasp detection via domain prior knowledge," in CVPR, 2024. 1, 3, 5

[4] H. Liang, X. Ma, S. Li, M. Gorner, S. Tang, B. Fang, F. Sun, and J. Zhang, "PointNetGPD: Detecting grasp configurations from point sets," in ICRA, 2019. 7

[5] D. Morrison, P. Corke, and J. Leitner, "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach," in RSS, 2018. 7

[6] A. Mousavian, C. Eppner, and D. Fox, "6-DOF GraspNet: Variational grasp generation for object manipulation," in ICCV, 2019. 1, 3, 6

[7] W. Shen, G. Yang, A. Yu, J. Wong, L. P. Kaelbling, and P. Isola, "Distilled feature fields enable few-shot language-guided manipulation," in CoRL, 2023. 1

[8] C. Choy, J. Gwak, and S. Savarese, "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks," in CVPR, 2019. 1, 2, 6

[9] P.-S. Wang, Y. Liu, Y.-X. Guo, C.-Y. Sun, and X. Tong, "O-CNN: Octree-Based Convolutional Neural Networks for 3D Shape Analysis," SIGGRAPH, 2017. 2, 3, 4, 7, 8

[10] S. Iwase, K. Liu, V. Guizilini, A. Gaidon, K. Kitani, R. Ambrus, and S. Zakharov, "Zero-shot multi-object scene completion," in ECCV, 2024. 1, 4, 6, 7, 8

[11] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi, "Objaverse: A Universe of Annotated 3D Objects," CVPR, 2022. 2, 3

[12] A. Depierre, E. Dellandrea, and L. Chen, "Jacquard: A large scale dataset for robotic grasp detection," IROS, 2018. 3

[13] H. Zhang, X. Lan, S. Bai, X. Zhou, Z. Tian, and N. Zheng, "Roi-based robotic grasp detection for object overlapping scenes," in IROS, 2019. 3

[14] X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee, "Learning 6-dof grasping interaction via deep geometry-aware 3d representations," in ICRA, 2018. 3

[15] C. Eppner, A. Mousavian, and D. Fox, "ACRONYM: A large-scale grasp dataset based on simulation," in ICRA, 2021. 3

[16] H. Zhang, D. Yang, H. Wang, B. Zhao, X. Lan, J. Ding, and N. Zheng, "REGRAD: A large-scale relational grasp dataset for safe and object-specific robotic grasping in clutter," RA-L, 2022. 3

[17] H. Jung, G. Zhai, S.-C. Wu, P. Ruhkamp, H. Schieber, P. Wang, G. Rizzoli, H. Zhao, S. D. Meier, D. Roth, N. Navab, et al., "HouseCat6D: A large-scale multimodal category level 6d object perception dataset with household objects in realistic scenarios," CVPR, 2024. 3

[18] T. Nguyen, M. N. Vu, B. Huang, A. Vuong, Q. Vuong, N. Le, T. Vo, and A. Nguyen, "Language-driven 6-dof grasp detection using negative prompt guidance," in ECCV, 2024. 3, 6

[19] S. F. Bhat, R. Birkl, D. Wofk, P. Wonka, and M. Müller, "ZoeDepth: Zero-shot transfer by combining relative and metric depth," 2023. 3

[20] Z. Huang, S. Stojanov, A. Thai, V. Jampani, and J. M. Rehg, "ZeroShape: Regression-based Zero-shot Shape Reconstruction," CVPR, 2023. 2

[21] X. Ren, J. Huang, X. Zeng, K. Museth, S. Fidler, and F. Williams, "XCube: Large-scale 3d generative modeling using sparse voxel hierarchies," in CVPR, 2024. 2

[22] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger, "Occupancy Networks: Learning 3D Reconstruction in Function Space," in CVPR, 2019.

[23] S. Peng, M. Niemeyer, L. Mescheder, M. Pollefeys, and A. Geiger, "Convolutional Occupancy Networks," in ECCV, 2020. 2

[24] M. Z. Irshad, T. Kollar, M. Laskey, K. Stone, and Z. Kira, "CenterSnap: Single-shot multi-object 3d shape reconstruction and categorical 6d pose and size estimation," 2022.

[25] M. Z. Irshad, S. Zakharov, R. Ambrus, T. Kollar, Z. Kira, and A. Gaidon, "ShAPO: Implicit representations for multi-object shape appearance and pose optimization," 2022.

[26] A. Bozic, P. Palafox, J. Thies, A. Dai, and M. Nießner, "TransformerFusion: Monocular rgb scene reconstruction using transformers," in NeurIPS, 2021.

[27] A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Nießner, "ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans," in CVPR, 2018.

[28] H.-X. Chen, J. Huang, T.-J. Mu, and S.-M. Hu, "CIRCLE: Convolutional Implicit Reconstruction And Completion For Large-Scale Indoor Scene," in ECCV, 2022.

[29] M. Lunayach, S. Zakharov, D. Chen, R. Ambrus, Z. Kira, and M. Z. Irshad, "FSD: Fast self-supervised single rgb-d to categorical 3d objects," in ICRA, 2024.

[30] J. Huang, Z. Gojcic, M. Atzmon, O. Litany, S. Fidler, and F. Williams, "Neural Kernel Surface Reconstruction," in CVPR, 2023.

[31] Y. Li, Z. Yu, C. Choy, C. Xiao, J. M. Alvarez, S. Fidler, C. Feng, and A. Anandkumar, "VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion," in CVPR, 2023. 2

[32] N. Heppert, M. Z. Irshad, S. Zakharov, K. Liu, R. A. Ambrus, J. Bohg, A. Valada, and T. Kollar, "CARTO: Category and joint agnostic reconstruction of articulated objects," in CVPR, 2023.

[33] C.-Y. Wu, J. Johnson, J. Malik, C. Feichtenhofer, and G. Gkioxari, "Multiview Compressive Coding for 3D Reconstruction," in CVPR, 2023. 2

[34] A. Boulch and R. Marlet, "POCO: Point Convolution for Surface Reconstruction," in CVPR, 2022. 2

[35] T. Shen, J. Gao, K. Yin, M.-Y. Liu, and S. Fidler, "Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis," in NeurIPS, 2021.

[36] Z. Liu, Y. Feng, M. J. Black, D. Nowrouzezahrai, L. Paull, and W. Liu, "MeshDiffusion: Score-based Generative 3D Mesh Modeling," in ICLR, 2023.

[37] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove, "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation," in CVPR, 2019.

[38] X. Yu, Y. Rao, Z. Wang, Z. Liu, J. Lu, and J. Zhou, "PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers," in ICCV, 2021. 2

[39] X. Yan, L. Lin, N. J. Mitra, D. Lischinski, D. Cohen-Or, and H. Huang, "ShapeFormer: Transformer-based Shape Completion via Sparse Representation," in CVPR, 2022. 2

[40] P. Mittal, Y.-C. Cheng, M. Singh, and S. Tulsiani, "AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation," in CVPR, 2022.

[41] Y.-C. Cheng, H.-Y. Lee, S. Tulyakov, A. G. Schwing, and L.-Y. Gui, "SDFusion: Multimodal 3d shape completion, reconstruction, and generation," in CVPR, 2023.

[42] J. Varley, C. DeChant, A. Richardson, J. Ruales, and P. Allen, "Shape completion enabled robotic grasping," in IROS, 2017. 3

[43] P.-S. Wang, Y. Liu, and X. Tong, "Deep Octree-based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion," in CVPRW, 2020. 6

[44] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, "Semantic Scene Completion from a Single Depth Image," CVPR, 2017.

[45] D. Zhang, C. Choi, I. Park, and Y. M. Kim, "Probabilistic Implicit Scene Completion," in ICLR, 2022.

[46] S. S. Mohammadi, N. F. Duarte, D. Dimou, Y. Wang, M. Taiana, P. Morerio, A. Dehban, P. Moreno, A. Bernardino, A. Del Bue, and J. Santos-Victor, "3DSGrasp: 3D Shape-Completion for Robotic Grasp," in ICRA, 2023.

[47] P. Zhang, W. Liu, Y. Lei, H. Lu, and X. Yang, "Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion," in ICCV, 2019. 2

[48] J. Li, K. Han, P. Wang, Y. Liu, and X. Yuan, "Anisotropic Convolutional Networks for 3D Semantic Scene Completion," in CVPR, 2020. 2

[49] F. Williams, J. Huang, J. Swartz, G. Klar, V. Thakkar, M. Cong, X. Ren, R. Li, C. Fuji-Tsang, S. Fidler, E. Sifakis, and K. Museth, "fVDB: A deep-learning framework for sparse, large-scale, and high-performance spatial intelligence," SIGGRAPH, 2024. 2

[50] K. Museth, "VDB: High-resolution sparse volumes with dynamic topology," 2013. 2

[51] E. R. Chan, K. Nagano, M. A. Chan, A. W. Bergman, J. J. Park, A. Levy, M. Aittala, S. D. Mello, T. Karras, and G. Wetzstein, "Generative novel view synthesis with 3D-aware diffusion models," in CoRR, 2023. 2

[52] R. Liu, R. Wu, B. V. Hoorick, P. Tokmakov, S. Zakharov, and C. Vondrick, "Zero-1-to-3: Zero-shot One Image to 3D Object," in ICCV, 2023. 2

[53] D. Watson, W. Chan, R. Martin-Brualla, J. Ho, A. Tagliasacchi, and M. Norouzi, "Novel View Synthesis with Diffusion Models," CoRR, 2022. 2

[54] J. Xu, W. Cheng, Y. Gao, X. Wang, S. Gao, and Y. Shan, "InstantMesh: Efficient 3d mesh generation from a single image with sparse-view large reconstruction models," CoRR, 2024. 2

[55] V.-D. Nguyen, "Constructing force-closure grasps," in ICRA, 1986. 3, 7

[56] A. Bicchi and V. Kumar, "Robotic grasping and contact: a review," in ICRA, 2000. 3

[57] C. Wang, H.-S. Fang, M. Gou, H. Fang, J. Gao, and C. Lu, "Graspness discovery in clutters for fast and accurate grasp detection," in ICCV, 2021. 3, 6, 7, 8

[58] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, "Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics," in RSS, 2017. 3

[59] E. Chisari, N. Heppert, T. Welschehold, W. Burgard, and A. Valada, "CenterGrasp: Object-aware implicit representation learning for simultaneous shape reconstruction and 6-dof grasp estimation," RA-L, 2024. 3, 7

[60] Z. Jiang, Y. Zhu, M. Svetlik, K. Fang, and Y. Zhu, "Synergies between affordance and geometry: 6-dof grasp detection via implicit representations," RSS, 2021. 3

[61] P. Grady, C. Tang, C. D. Twigg, M. Vo, S. Brahmbhatt, and C. C. Kemp, "ContactOpt: Optimizing contact to improve grasps," in CVPR, 2021. 3

[62] D. Morrison, P. Corke, and J. Leitner, "EGAD! An evolved grasping analysis dataset for diversity and reproducibility in robotic manipulation," RA-L, 2020. 3

[63] D. Rempe, T. Birdal, A. Hertzmann, J. Yang, S. Sridhar, and L. J. Guibas, "HuMoR: 3D human motion model for robust pose estimation," in ICCV, 2021. 4, 5

[64] N. Ravi, V. Gabeur, Y.-T. Hu, R. Hu, C. Ryali, T. Ma, H. Khedr, R. Rädle, C. Rolland, L. Gustafson, E. Mintun, J. Pan, K. V. Alwala, N. Carion, C.-Y. Wu, R. Girshick, P. Dollar, and C. Feichtenhofer, "SAM 2: Segment anything in images and videos," CoRR, 2024. 4

[65] D. P. Kingma and M. Welling, "Auto-encoding variational bayes," ICLR, 2014. 4, 5

[66] J. Su, Y. Lu, S. Pan, B. Wen, and Y. Liu, "RoFormer: Enhanced Transformer with Rotary Position Embedding," in ICLR, 2020. 4

[67] X.-M. Wu, J.-F. Cai, J.-J. Jiang, D. Zheng, Y.-L. Wei, and W.-S. Zheng, "An economic framework for 6-dof grasp detection," in ECCV, 2024. 5, 7

[68] F. Chu, R. Xu, and P. A. Vela, "Real-world multiobject, multigrasp detection," in RA-L, 2018. 7

[69] A. ten Pas, M. Gualtieri, K. Saenko, and R. W. Platt, "Grasp pose detection in point clouds," IJRR, 2017. 7

[70] M. Haoxiang and D. Huang, "Towards scale balanced 6-dof grasp detection in cluttered scenes," in CoRL, 2022. 7

[71] K. Shankar, M. Tjersland, J. Ma, K. Stone, and M. Bajracharya, "A learned stereo depth system for robotic manipulation in homes," RA-L, 2021. 6

[72] M. Denninger, D. Winkelbauer, M. Sundermeyer, W. Boerdijk, M. Knauer, K. H. Strobl, M. Hunt, and R. Triebel, "BlenderProc2: A Procedural Pipeline for Photorealistic Rendering," Journal of Open Source Software, 2023. 6

[73] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State, "Isaac Gym: High performance GPU-based physics simulation for robot learning," CoRR, 2021. 7

[74] K. Mamou, "V-HACD: Volumetric hierarchical approximate convex decomposition," https://github.com/kmammou/v-hacd, 2016. 7

[75] S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He, "Aggregated Residual Transformations for Deep Neural Networks," CVPR, 2017. 7

[76] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in CVPR, 2009. 7

[77] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," in ICLR, 2019. 7

[78] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras," IEEE Transactions on Robotics, 2017. 8

[79] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, "NeRF: Representing scenes as neural radiance fields for view synthesis," in ECCV, 2020. 8
CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aca4f8fa2c9182116e4d0b45c8c0994d7af8096e1073950dddeedbab13588680
size 670543
CVPR/2025/ZeroGrasp_ Zero-Shot Shape Reconstruction Enabled Robotic Grasping/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ce01fa4bd412a845a7876366806c3197bab3871498e779a9e645d5fe2d3d056
size 546263
CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/07d97fcd-beb6-4db8-9d15-727f79d06230_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da22e8d4be3c22217e804f4140cde67358ef276f38c7b876af0fde5533024661
size 82207
CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/07d97fcd-beb6-4db8-9d15-727f79d06230_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9159183691c8f904518f780452294593d7071555c639abf8046607c72927aab
size 107283
CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/07d97fcd-beb6-4db8-9d15-727f79d06230_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6229722b82d716bb2e8e0d91859c4d5f1eb03721131eedd6ad862a38a7fd77f
size 1845423
CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/full.md
ADDED
@@ -0,0 +1,295 @@
# ZeroVO: Visual Odometry with Minimal Assumptions

Lei Lai*, Zekai Yin*, Eshed Ohn-Bar (Boston University)

{leilai, zekaiyin, eohnbar}@bu.edu

# Abstract
We introduce ZeroVO, a novel visual odometry (VO) algorithm that achieves zero-shot generalization across diverse cameras and environments, overcoming limitations in existing methods that depend on predefined or static camera calibration setups. Our approach incorporates three main innovations. First, we design a calibration-free, geometry-aware network structure capable of handling noise in estimated depth and camera parameters. Second, we introduce a language-based prior that infuses semantic information to enhance robust feature extraction and generalization to previously unseen domains. Third, we develop a flexible, semi-supervised training paradigm that iteratively adapts to new scenes using unlabeled data, further boosting the models' ability to generalize across diverse real-world scenarios. We analyze complex autonomous driving contexts, demonstrating over $30\%$ improvement against prior methods on three standard benchmarks—KITTI, nuScenes, and Argoverse 2—as well as a newly introduced, high-fidelity synthetic dataset derived from Grand Theft Auto (GTA). By not requiring fine-tuning or camera calibration, our work broadens the applicability of VO, providing a versatile solution for real-world deployment at scale.
# 1. Introduction
For a robot or autonomous vehicle to function reliably in the real world, a generalized Visual Odometry (VO) system is essential—one that can robustly estimate the relative camera pose in metric coordinates from a sequence of images under diverse and unforeseen conditions. However, generalization remains a significant challenge for current VO models, which often suffer from lost feature tracks, optimization instability, and drift, particularly when exposed to varying lighting, dynamic scenes, or adverse weather conditions [1, 11, 31, 35, 54, 69].
Due to the inherent difficulty and ambiguity in modeling
camera ego-motion, a dynamic 3D world, and real-world scale from 2D images, monocular VO algorithms have traditionally been built on strong assumptions and geometric constraints [3, 7, 10, 13, 16, 19, 20, 51, 52, 57, 84]. While carefully designed camera calibration or evaluation on fixed data distributions can be effective in controlled settings, such approaches can limit adaptability and scalability to real-world scenarios with varying configurations that may not align with such assumptions.
VO techniques have increasingly adopted learning-based components to exploit statistical regularities in scene structure and motion dynamics. However, most learning-based methods rely on privileged ground-truth data (e.g., accurate camera parameters, optical flow) for supervision and often train and evaluate on the same dataset [7, 20, 32, 33, 57, 62, 63, 68, 77]. Although recent studies explore generalization beyond single-dataset settings [37, 42, 62, 63, 70], current models continue to exhibit significant errors in the presence of more complex everyday contexts [1, 11, 31, 35, 54], including harsh conditions such as rainy or snowy nights (e.g., frequent glare, water streaks, reflections, and reduced visibility), lens degradation (e.g., condensation, scratches, dirt), or highly dynamic environments (e.g., dense intersections or aggressive motion). How can we design VO models that generalize across conditions instead of quickly suffering from instability and drift?
In this work, we aim to advance the capabilities of learning-based monocular VO. We introduce ZeroVO, a novel transformer-based approach for robustly predicting relative camera motion at real-world scale across variable scenes in a zero-shot manner. By leveraging cross-attention mechanisms [18, 65] to efficiently integrate contextual and geometric priors directly into the network architecture, ZeroVO avoids common limiting assumptions—such as reliance on camera calibration or costly optimization steps. Specifically, we fuse versatile multimodal text [43, 45, 55] and depth-based priors [23, 27, 53, 80] to address inherent scale ambiguity in metric VO. We demonstrate that our proposed model is robust to noisy and uncalibrated setups. We further optimize the model using a novel multimodal semi-supervised training framework that filters noisy
pseudo-labels in a geometry and language-guided process. Our flexible VO framework achieves state-of-the-art, off-the-shelf performance across diverse autonomous driving datasets. To comprehensively assess system generalizability, we also collect and analyze a novel Grand Theft Auto (GTA) dataset featuring challenging scenarios with harsh weather, high-speed motion, complex traffic scenes, and varied camera settings. Our dataset and code are available at https://zvocvpr.github.io/.
# 2. Related Work
Our framework builds on advances in foundational computer vision models, particularly in metric depth prediction and rich, generalized vision-and-language embeddings.
Learning-Based Monocular Visual Odometry: Learning-based monocular visual odometry tasks can be roughly categorized into two main approaches: neural network models combined with multi-step geometric optimization (e.g., full SLAM [7, 42, 49, 62, 63, 95]) or direct, end-to-end relative pose estimation from two or few consecutive frames [37, 66, 70, 79]. Hybrid methods such as Droid-SLAM [62] have demonstrated strong performance in dense scene reconstruction and pose estimation. In contrast, two-frame pose regression tends to be more robust in short-distance tracking scenarios, while SLAM and other geometry-based approaches typically require continuous, long-frame sequences. These methods often rely on long-term feature matching and global optimization techniques, such as loop closure detection. Although certain methods [30] can aid in initialization, SLAM remains sensitive to environmental features and accurate motion tracking, i.e., can fail to build and update a reliable map in feature-deficient environments (e.g., corridors or repetitive textures) or highly dynamic settings (e.g., crowds). In contrast, two-frame pose regression is less affected by such conditions as it does not rely on maintaining a global representation. However, two-frame pose regression can be prone to drift accumulation, as it lacks the temporal optimization over extended frame sequences needed to correct for drift. Our work improves over two-frame approaches due to inherent efficiency, versatility (i.e., as input to downstream optimization), and minimal assumptions.
Metric Depth Estimation from Images: We leverage advances in metric depth estimation to address the inherent ambiguity in recovering camera translation at real-world scale. Traditional monocular depth models often rely on scale-invariant losses or sparse supervision, making them unsuitable for tasks such as visual odometry that require consistent metric scale. Recently, models for predicting metric depth have demonstrated practical performance [27, 53, 76, 82]. Models such as Depth Anything [76] and UniDepth [53] aim to generalize depth prediction across a wide range of scenes by leveraging large-scale vision foundation models. WordDepth [82] proposes the use of language-guided priors to reduce ambiguity in unconstrained prediction of scale. Metric3Dv2 [27] provides a zero-shot model that was trained across numerous datasets and is capable of predicting real-world scale depth (and surface normals) in diverse settings. By leveraging known camera intrinsics and extrinsics, the model learns to transform inputs into a canonical camera space. While existing models often struggle in challenging real-world scenarios, we adopt Metric3Dv2 to extract real-scale depth features that enable accurate and robust visual odometry. To further increase the flexibility and applicability of our approach, we do not rely on traditional camera calibration or predefined image information [85, 88-90]. Instead, we consider settings where calibration may be unavailable or inaccurate, and incorporate single-image camera parameter estimation techniques such as WildCamera [94] to support inference under uncalibrated conditions.
Rich Vision-and-Language Embeddings: Language-guided models have shown strong generalization capabilities by effectively bridging multiple modalities. Through joint embedding spaces that capture generalized semantic relationships between images and language, Vision-Language Large Models (VLLMs) models have recently achieved state-of-the-art results in diverse tasks such as image captioning [15, 74, 81], visual question answering [2], and cross-modal retrieval [26]. LLaVA [45], for instance, is now being broadly used across contexts and tasks [12, 43, 46, 87]. Preliminary studies in autonomous driving, e.g., Tian et al. [64], have shown VLLMs to be useful for robustness under long-tail events. In our work, we propose to integrate VLLMs to extract high-level semantic descriptions of driving scenes that could serve as language-based priors that guide metric-scale odometry and complement adaptive inference under challenging visual conditions.
Semi-Supervised Learning: Our work aims to develop flexible models that can effectively adapt to new environments, including through the use of unlabeled data. Semi-supervised learning (SSL) is being increasingly used in computer vision and machine learning tasks, particularly in domains where annotated data is scarce, costly, or requires expert supervision [4, 5, 9, 14, 21, 24, 25, 28, 34, 38, 58, 61, 93]. In the context of visual odometry, SSL can potentially enable the use of large-scale, unlabeled video data, such as web videos [37, 86], to expand the diversity of training scenarios and further improve generalization. However, SSL also presents challenges, including noisy pseudo-labels and the risk of propagating errors through repetitive training cycles, which we address in our work through multimodal pseudo-label selection mechanisms.

Figure 1. Multimodal and Geometry-Guided Network Overview. Given a pair of input images, our model computes a rich multimodal embedding through a transformer-based fusion module. The embedding is then passed to a two-branch decoder MLP that outputs real-world translation and rotation. Our architecture (Sec. 3.1) leverages cross-attention to fuse complementary cues, including flow, depth, camera intrinsics, and language-based features in a geometry-aware manner. The language prior is first used to refine both the depth map and 2D flow estimates. The refined depth is then unprojected into 3D (using estimated parameters) to compute scene flow, which is further enhanced and fused with additional features before decoding. By embedding geometric reasoning and multimodal priors directly into the network structure, our model achieves strong zero-shot generalization across diverse and challenging settings.
# 3. Method
Our method (Fig. 1) facilitates generalization via minimal and versatile image-based priors, integrated throughout our model structure. In this section, we first formalize our generalized, calibration-free monocular VO task. We then detail the proposed transformer-based geometry and prior-guided network structure in Sec. 3.1 and the semi-supervised training process in Sec. 3.2.
Monocular VO with Minimal Assumptions: In its most general form, monocular VO assumes two consecutive RGB frames $\mathcal{I} = \{\mathbf{I}_{i - 1},\mathbf{I}_i\}$ , $\mathbf{I}\in \mathbb{R}^{W\times H\times 3}$ and learns to predict a real-world relative pose between the two camera views $\mathbf{T}_i = [\mathbf{R}_i|\mathbf{t}_i]$ , where $\mathbf{R}_i\in \mathrm{SO}(3)$ , $\mathbf{t}_i\in \mathbb{R}^3$ are the relative rotation and translation, respectively. We focus on the efficient two-frame setup as it enables a fair comparison to other baselines methods (e.g., TartanVO [70]) while quantifying real-time sequential drift, i.e., prior to any additional global optimization steps, such as loop closure and bundle adjustment [51, 60, 62]. In Sec. 4, we find ZeroVO to outperform more complex methods that leverage computationally expensive, multi-frame refinement steps. We emphasize that monocular VO methods generally evaluate under up-to-scale settings [51, 63, 70], as estimating a metric-scaled transform from image pairs can be difficult, while reducing the solution space through known camera pose $\mathbf{T}_i^{cam}$ and intrinsics, including the camera's focal length
and center, $\{f_U,f_V,c_U,c_V\}$ (these are used in the camera intrinsic matrix, denoted as $\mathbf{K}_i\in \mathbb{R}^{3\times 3}$ ). However, in our formulation, we do not assume any prior knowledge of camera parameters, as it can be limiting and require recalibration in cases of lens issues or different camera setups. Instead, to guide learning and inference, we rely on a set of versatile image-based priors built into the network structure. Specifically, we extract a rich set of modalities, including estimated flow $\hat{\mathbf{F}}_i\in \mathbb{R}^{W\times H\times 2}$ , depth map $\hat{\mathbf{D}}_i\in \mathbb{R}^{W\times H}$ , camera parameters $\hat{\mathbf{K}}_i$ , and rich language-based context features $\mathbf{Z}_i^l\in \mathbb{R}^{W_l\times H_l}$ that provide complementary cues regarding scene semantics, layout characteristics, and scale. Our network structure fuses the estimated cues in a geometrically-guided process, discussed next.
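
To make the two-frame formulation concrete, the following is a minimal sketch (not the authors' code; all container and field names are illustrative) of the estimated per-pair inputs and the predicted output described above.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FramePairCues:
    """Estimated cues for one image pair (hypothetical container, matching the notation above)."""
    flow: np.ndarray       # (H, W, 2)   estimated 2D optical flow F_hat
    depth: np.ndarray      # (H, W)      estimated metric depth D_hat for frame i
    K: np.ndarray          # (3, 3)      estimated camera intrinsics K_hat
    text_feat: np.ndarray  # (W_l, H_l)  language-based context features Z_l

@dataclass
class RelativePose:
    """Predicted relative pose T_i = [R_i | t_i] at metric scale."""
    R: np.ndarray          # (3, 3) rotation in SO(3)
    t: np.ndarray          # (3,)   translation in meters
```
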
# 3.1. Geometry and Prior-Guided Network
Our network structure comprises three key components: (1) an encoding module, which estimates camera intrinsic parameters and extracts a rich, multimodal set of cues; (2) a text-conditional, geometry-guided transformer module that leverages general structural priors to unproject data into 3D space and fuse the different modalities; and (3) a decoding module for probabilistically predicting ego-motion.
Intrinsic Parameters Estimation: VO methods generally rely on accurate knowledge of camera extrinsic and intrinsic parameters while training and testing on datasets with fixed
camera settings. To enable more generalized VO, we do not rely on such restrictive assumptions. We instead propose to estimate the camera intrinsic parameters leveraging recent advances in in-the-wild, single-image intrinsic parameter estimation [27, 94] (primarily relying on 3D monocular priors). We leverage an off-the-shelf solution [94], as we do not require the estimation to be completely accurate. The intrinsic matrix will also be used to inform the geometry-aware transformer and semi-supervised network training (Sec. 3.2). To align with image-level cues and enable the network to recover from noisy estimates, the intrinsic parameters are encoded into an image-sized array,
$$
\mathbf{I}^{\hat{\mathbf{K}}}(u, v) = \frac{|u - c_{U}|}{f_{U}} + \frac{|v - c_{V}|}{f_{V}} \tag{1}
$$
where the intrinsic information is explicitly preserved within each intrinsic map [70]. Encoding parameter information into an image map provides an efficient approach for our transformer module to reason over noisy geometric information, as will be discussed below. We note that $\mathbf{I}^{\hat{\mathbf{K}}}$ uniquely represents a specific camera configuration.
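
As a concrete illustration of Eq. (1), a minimal NumPy sketch (assuming a standard pinhole intrinsic matrix layout; not the authors' implementation) that encodes estimated intrinsics into an image-sized map could look as follows.

```python
import numpy as np

def intrinsic_map(K: np.ndarray, width: int, height: int) -> np.ndarray:
    """Encode estimated intrinsics into an (H, W) map, following Eq. (1)."""
    fu, fv = K[0, 0], K[1, 1]      # focal lengths
    cu, cv = K[0, 2], K[1, 2]      # principal point
    u = np.arange(width)[None, :]  # (1, W) column coordinates
    v = np.arange(height)[:, None] # (H, 1) row coordinates
    return np.abs(u - cu) / fu + np.abs(v - cv) / fv  # broadcasts to (H, W)
```
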
Extracting Multimodal Image Cues: To holistically represent general scene priors, scene dynamics, and camera motion and geometry, we employ a rich and complementary set of image-based features. As in standard VO methods, we extract optical flow [70] from the image pair using a MaskFlownet [91] encoder (We extract the optical flow $\hat{\mathbf{F}}$ as well as a correlation feature $\hat{\mathbf{F}}^c$ , which represents 2D correspondences between the images, from the intermediate layer of MaskFlownet). To estimate a metric-scale depth map $\hat{\mathbf{D}}$ , we utilize the estimated camera intrinsic parameters with Metric3Dv2 [27]. Finally, although camera information and metric depth can aid in understanding camera projection and motion, estimating these from a single image can be noisy and ill-posed. Thus, in addition to depth-based cues, we propose to leverage complementary text-based cues that can reduce ambiguity by capturing high-level scene semantics and layout characteristics. Specifically, we leverage LLaVA-NeXT [44] to extract rich image descriptions which are encoded using Sentence Transformers [55]. In addition to providing useful context in arbitrary scenes during inference, we leverage the language-based cues to filter noisy pseudo-labels in Sec. 3.2. We fuse modalities in a geometry-guided process, described next.
Unprojection to Pseudo-3D: The estimated depth map can be unprojected into a 3D point cloud $\mathbf{P} \in \mathbb{R}^{W \times H \times 3}$ using the estimated camera matrix [71], i.e., by computing 3D world coordinates $\mathbf{p} = d\hat{\mathbf{K}}^{-1}\mathbf{u}$ , where $\mathbf{u} = (u, v)$ is a pixel in homogeneous coordinate and $d = \hat{\mathbf{D}}(\mathbf{u})$ . We stack and normalize the resulting unprojection into a 3D array $\hat{\mathbf{D}}^{\mathrm{3D}}$ . We unproject the 2D optical flow into 3D to obtain a scene flow $\hat{\mathbf{F}}^{\mathrm{3D}}$ matrix (additional details regarding
this step can be found in our supplementary). While these steps integrate physically-coherent camera and 3D information into a consistent representation, we expect the 3D maps to be noisy, particularly in our challenging generalization and adverse settings. Hence, instead of being explicit constraints, the 3D maps are integrated as minimal structures into a transformer-based module.
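
A sketch of the unprojection is shown below. The scene-flow construction here is a simplified nearest-pixel version written for illustration; the paper's exact procedure (deferred to its supplementary) may differ.

```python
import numpy as np

def unproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Unproject a metric depth map to per-pixel 3D points p = d * K^-1 * [u, v, 1]^T."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))            # pixel grid, each (H, W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)          # (H, W, 3) homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                            # K^-1 u for every pixel
    return rays * depth[..., None]                             # (H, W, 3) 3D points

def scene_flow(depth0: np.ndarray, depth1: np.ndarray,
               flow01: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Approximate 3D scene flow by unprojecting matched pixels in both frames."""
    H, W = depth0.shape
    p0 = unproject_depth(depth0, K)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    u1 = np.clip(np.round(u + flow01[..., 0]).astype(int), 0, W - 1)
    v1 = np.clip(np.round(v + flow01[..., 1]).astype(int), 0, H - 1)
    p1 = unproject_depth(depth1, K)[v1, u1]                    # 3D points at flowed locations
    return p1 - p0                                             # (H, W, 3)
```
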
Language and Geometry-Guided Transformer: We employ a transformer [18, 65] to fuse the multimodal priors while reasoning over structure and noisy pseudo-3D information. We process the estimated flow and depth maps to compute two types of language-conditioned descriptors, a depth-based feature $\mathbf{Z}^D$,
$$
\mathbf{Z} = \mathrm{CA}(\mathrm{PE}([\hat{\mathbf{D}}, \mathbf{I}^{\hat{\mathbf{K}}}]), \mathbf{Z}^{l}) \tag{2}
$$

$$
\mathbf{Z}^{\mathrm{D}} = \mathrm{CA}(\mathrm{PE}(\hat{\mathbf{D}}^{\mathrm{3D}}), \mathbf{Z}) \tag{3}
$$
and a flow-based feature $\mathbf{Z}^F$ computed in a similar manner,
$$
\mathbf{Z} = \mathrm{CA}(\mathrm{PE}(\hat{\mathbf{F}}^{c}), \mathbf{Z}^{l}) \tag{4}
$$

$$
\mathbf{Z}^{\mathrm{F}} = \mathrm{CA}(\mathrm{PE}(\hat{\mathbf{F}}^{\mathrm{3D}}), \mathbf{Z}) \tag{5}
$$
where $\mathrm{CA}(\mathbf{Q},\mathbf{KV})$ denotes Cross-Attention, with query $Q$ and key-value pair KV, and PE denotes a patch and positional embedding [18]. We note that we concatenate features with the intrinsic image to enable the model to learn coherence under noise, as accurate 3D reasoning is influenced by the focal length [27].
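
A minimal PyTorch sketch of the depth branch in Eqs. (2)-(3) is given below (the flow branch of Eqs. (4)-(5) is analogous). Layer sizes, patch size, text-feature dimensionality, and the omission of an explicit positional embedding are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class LanguageGeometryFusion(nn.Module):
    """Sketch of the cross-attention fusion CA(PE(.), .) for the depth branch."""
    def __init__(self, dim=256, heads=8, patch=16, text_dim=768):
        super().__init__()
        self.embed_2d = nn.Conv2d(2, dim, patch, stride=patch)  # patchify [D_hat, I_K] (2 channels)
        self.embed_3d = nn.Conv2d(3, dim, patch, stride=patch)  # patchify unprojected D_3D (3 channels)
        self.text_proj = nn.Linear(text_dim, dim)                # project language features Z_l
        self.ca1 = nn.MultiheadAttention(dim, heads, batch_first=True)  # Eq. (2)
        self.ca2 = nn.MultiheadAttention(dim, heads, batch_first=True)  # Eq. (3)

    def forward(self, depth_and_intrinsic, depth_3d, z_text):
        # depth_and_intrinsic: (B, 2, H, W), depth_3d: (B, 3, H, W), z_text: (B, T, text_dim)
        q1 = self.embed_2d(depth_and_intrinsic).flatten(2).transpose(1, 2)  # (B, N, dim)
        kv = self.text_proj(z_text)                                          # (B, T, dim)
        z, _ = self.ca1(q1, kv, kv)        # language-conditioned 2D descriptor, Eq. (2)
        q2 = self.embed_3d(depth_3d).flatten(2).transpose(1, 2)
        z_d, _ = self.ca2(q2, z, z)        # depth-based feature Z^D, Eq. (3)
        return z_d
```
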
Probabilistic Ego-Motion Decoder: The refined and aligned features, $\mathbf{Z}^{\mathrm{F}}$ and $\mathbf{Z}^{\mathrm{D}}$ , are concatenated and decoded into ego-motion. Our decoder consists of two MLP output branches, one predicting translation and the other rotation. For translation, we leverage metric-scale regression [70]. For rotation estimation, we fit a probabilistic distribution, specifically a matrix Fisher distribution (following [37, 48, 50]) to model the rotation distribution in SO(3).
$$
p(\mathbf{R} \mid \boldsymbol{\Psi}) = \frac{1}{c(\boldsymbol{\Psi})} \exp\big(\mathrm{tr}(\boldsymbol{\Psi}^{\top} \mathbf{R})\big) \tag{6}
$$
where $\mathbf{R} \in \mathrm{SO}(3)$ is the rotation matrix, $\Psi \in \mathbb{R}^{3 \times 3}$ are the parameters of matrix Fisher distribution, and $c(\Psi)$ is a normalization constant [48].
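
The two-branch decoder and the rotation parameterization can be sketched as follows. Hidden sizes are illustrative, and the normalization constant $c(\boldsymbol{\Psi})$ needed for the training likelihood [48] is omitted; the helper shows only the standard mode of the matrix Fisher distribution, i.e., the projection of $\boldsymbol{\Psi}$ onto SO(3), used to read out a single rotation.

```python
import torch
import torch.nn as nn

class EgoMotionDecoder(nn.Module):
    """Sketch: predicts metric translation and matrix Fisher parameters Psi."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.trans_head = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.rot_head = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 9))

    def forward(self, z):
        t = self.trans_head(z)                    # (B, 3) metric translation
        psi = self.rot_head(z).view(-1, 3, 3)     # (B, 3, 3) Fisher parameters
        return t, psi

def fisher_mode(psi: torch.Tensor) -> torch.Tensor:
    """Mode of the matrix Fisher distribution: SVD-based projection of Psi onto SO(3)."""
    U, _, Vh = torch.linalg.svd(psi)
    det = torch.det(U @ Vh)                       # fix reflections to stay in SO(3)
    ones = torch.ones_like(det)
    S = torch.diag_embed(torch.stack([ones, ones, det], dim=-1))
    return U @ S @ Vh                             # (B, 3, 3) rotation matrices
```
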
# 3.2. Model Training via Semi-Supervision
Due to the minimal assumptions employed by our calibration-free VO framework, the model can be effectively trained over in-the-wild, large-scale video collections. Hence, we consider both the standard supervised and a proposed semi-supervised training setup, detailed in this section. We employ the rich priors extracted from Sec. 3.1 in the semi-supervised training to filter noisy pseudo-labeled samples.
Supervised Training: Our model can be trained for a standard VO task, without requiring privileged information, e.g., ground-truth camera parameters, flow, or depth. We optimize the multi-head decoder MLP using Mean Squared Error (MSE) loss over predicted translation $\hat{\mathbf{t}}$ and negative log-likelihood of rotation $\mathbf{R}$ over the predicted distribution parameters $\hat{\Psi}$ ,
$$
\mathcal{L} = \|\mathbf{t} - \hat{\mathbf{t}}\|_{2}^{2} - \log\big(p(\mathbf{R} \mid \hat{\boldsymbol{\Psi}})\big) \tag{7}
$$
While our supervised model already achieves strong performance, we further explore incorporating an additional training stage using pseudo-labeled samples generated by running the first-stage model on unlabeled data.
Generalization with Semi-Supervised Training: Our goal is to learn effective representations for generalized VO at scale. We thus investigate leveraging semi-supervised training to continue and update the model from unlabeled data. This training involves two stages, first with a supervised (i.e., teacher) model trained using the aforementioned objective function on an annotated dataset. Next, we sample pseudo-labels from the model [9, 39, 56] over a large unconstrained dataset collected from YouTube [75], and re-train the model over the mixed annotated and pseudolabeled dataset. Thus, the semi-supervised setup enables us to investigate the robustness and flexibility of our model in learning from diverse and challenging data with noisy supervision. While semi-supervised training has become a standard evaluation setup in computer vision [29, 40, 59, 67, 73, 78], as in Sec. 3.1 we explore the benefits of prior-informed mechanisms that can facilitate learning at scale from noisy examples.
Geometry-Guided Pseudo-Label Selection: To robustly learn from potentially noisy pseudo-labels, we employ a geometrical consistency error obtained based on estimated quantities. Specifically, motivated by prior work in unsupervised VO using known camera parameters [41, 47, 47, 83, 92], we warp a frame to the next frame with the estimated intrinsic matrix and ego-motion, $\mathbf{u}_i = \hat{\mathbf{K}}_i(d\hat{\mathbf{R}}_i\hat{\mathbf{K}}_{i - 1}^{-1}\mathbf{u}_{i - 1} + \hat{\mathbf{t}}_i)$ . We then employ a Structural Similarity Index Measure (SSIM) error [6] to quantify the similarity between an observed image $\mathbf{I}_{i + 1}$ and $\hat{\mathbf{I}}_{i + 1}$ . To ensure that we capture diverse patterns of reconstruction challenges, we further normalize by the two-frame SSIM, i.e.,
$$
\mathrm{normSSIM} = \frac{\mathrm{SSIM}\left(\hat{\mathbf{I}}_{i+1}, \mathbf{I}_{i+1}\right)}{\mathrm{SSIM}\left(\mathbf{I}_{i}, \mathbf{I}_{i+1}\right)} \tag{8}
$$
and exclude samples based on a fixed NormSSIM threshold. We note that SSIM assesses similarity by evaluating structural information, luminance, and contrast, thereby offering a perception-oriented measure of similarity in contrast to traditional measures based on pixel-wise errors.
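
A sketch of the resulting filter is shown below, using scikit-image's SSIM (assuming a recent version with the `channel_axis` argument). The warping of frame $i$ into frame $i+1$ via the estimated intrinsics, depth, and predicted ego-motion is assumed to be done beforehand, and the threshold value is an arbitrary placeholder rather than the paper's setting.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def keep_pseudo_label(img_i: np.ndarray, img_next: np.ndarray,
                      img_next_warped: np.ndarray, thresh: float = 0.9) -> bool:
    """Geometry-guided pseudo-label filter via normalized SSIM, Eq. (8)."""
    s_warp = ssim(img_next_warped, img_next, channel_axis=-1, data_range=1.0)
    s_pair = ssim(img_i, img_next, channel_axis=-1, data_range=1.0)
    norm_ssim = s_warp / s_pair
    return norm_ssim >= thresh  # discard samples whose warped reconstruction is too inconsistent
```
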
Language-Guided Pseudo-Label Selection: In addition to the geometry-based consistency pseudo-label check, we leverage our language-based module to filter redundant examples while maintaining an informative and diverse pseudo-labeled dataset. Although distinct text descriptions may not necessarily correspond to distinct pose transformations, we observe that two images characterized by nearly identical text descriptions are likely to be close in the visual space as well. To address sentence sequence variations within a paragraph, rather than serializing all text features into a single vector, we interpret the language feature as a subspace in a higher dimension. We leverage a subspace-based similarity over a short time window $H$ , and compute the text feature similarity between the first image $\mathbf{I}_i$ and the last image $\mathbf{I}_{i + H}$ in the time window [36]. Specifically, we compute similarity as:
$$
\mathrm{subspace\text{-}sim} = \sin\big(\arccos(\mathrm{trace}(\Lambda))\big)^{2} \tag{9}
$$
where $\Lambda$ is the eigenvalues matrix obtained via Singular Value Decomposition over $Q_{i}^{\mathsf{T}}Q_{i + H}$ , the orthonormal matrices from the QR decompositions of text features $\mathbf{Z}_i^l$ and $\mathbf{Z}_{i + H}^{l}$ . As in the geometric consistency selection, we remove sequences with low informativeness (i.e., high subspace-sim). The selection mechanism can thus help stabilize learning under the noisy and diverse pseudo-labels.
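
In NumPy, the similarity in Eq. (9) can be sketched as follows, treating each text feature matrix as spanning a subspace (one sentence embedding per row). The clipping of the trace into the valid domain of `arccos` is our own assumption for numerical safety; the paper's exact normalization of $\mathrm{trace}(\Lambda)$ is not specified here.

```python
import numpy as np

def subspace_similarity(z_text_i: np.ndarray, z_text_j: np.ndarray) -> float:
    """Subspace-based text similarity between two language feature matrices, Eq. (9)."""
    Qi, _ = np.linalg.qr(z_text_i.T)   # orthonormal basis of the first text subspace
    Qj, _ = np.linalg.qr(z_text_j.T)   # orthonormal basis of the second text subspace
    sv = np.linalg.svd(Qi.T @ Qj, compute_uv=False)  # singular values = cos(principal angles)
    trace = np.clip(sv.sum(), -1.0, 1.0)             # clipped for arccos (assumption)
    return float(np.sin(np.arccos(trace)) ** 2)
```
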
# 3.3. Implementation Details
In our implementation, we leverage the pre-trained WildCamera [94] model to estimate camera intrinsics. We utilize the MaskFlowNet encoder [91] and Metric3Dv2 [27] as flow and depth backbones, respectively. Sentence Transformers [55] is used to extract a $15 \times 768$ language-based feature matrix. For semi-supervised training, we follow prior work and collect a large-scale, unconstrained web video dataset for additional training [37, 75]. In our analysis, we present three model variants: ZeroVO, ZeroVO+, and LiteZeroVO+. ZeroVO serves as the default model in our experiments, while ZeroVO+ is further trained on the web video dataset with the proposed multimodal pseudo-label selection mechanism. LiteZeroVO+ is a resource-constrained variant that omits the language-conditioned input modules by replacing the cross-attention modules (used for conditioning on the language cues and refining the estimated flow and depth maps) with self-attention. The training protocol remains consistent with that of the standard ZeroVO+. We train our network architecture using an NVIDIA RTX 4090 GPU with a batch size of 16. ZeroVO+ achieves an inference speed of approximately 0.6 FPS, primarily constrained by the slower LLaVA-NeXT module (0.7 FPS), while LiteZeroVO+ reaches an inference speed of 5 FPS. Complete implementation and training details can be found in our supplementary.
# 4. Experiments
# 4.1. Experimental Setup
Real-World Datasets: To study the generalization ability of our model, we conduct experiments using five datasets including three widely adopted datasets for autonomous driving: nuScenes [8], KITTI [22], and Argoverse 2 [72], as well as an introduced Grand Theft Auto V (GTA) simulated dataset with challenging environmental and lens conditions. nuScenes covers four distinct regions across Boston and Singapore: Boston-Seaport, Singapore-OneNorth, Singapore-Queenstown, and Singapore-Holland Village. It encompasses various challenging conditions, such as heavy traffic, nighttime driving, and scenarios involving strong light reflections, making nuScenes particularly valuable for assessing the robustness of models under diverse and complex real-world conditions. In our evaluation, we train on a subset of nuScenes, and test on other benchmarks in a zero-shot manner. KITTI is the most widely evaluated dataset in the VO task. Specifically, the camera intrinsics in KITTI differ significantly from those of the other three benchmarks, making it an important dataset for evaluating a model's ability to adapt to varying camera configurations. Argoverse 2 collects data from six distinct U.S. cities and encompasses a wide range of weather conditions and driving scenarios. Notably, the dataset includes grayscale images captured by the stereo front camera, which provides another generalization stress-test for the model. We also follow Lai et al. [37] and leverage online driving videos from YouTube, encompassing footage across multiple cities, including urban areas, villages, national parks, mountainous regions, and coastal areas, under a wide range of weather conditions. This dataset enables us to study the benefits of diverse unlabeled data while providing an ideal environment for the model to self-learn numerous variations induced by camera motions.
GTA Dataset: Besides the three public datasets, we introduce a newly generated simulated dataset derived from the high-fidelity GTA simulation. Our GTA dataset consists of 922 driving sequences captured within a simulated city environment, encompassing a range of diverse weather conditions, driving speeds (particularly high-speed maneuvers not found in other public datasets), traffic scenarios, and times of day. Compared to other commonly used open-source simulation platforms such as CARLA [17], GTA offers several key advantages: (1) enhanced image realism through the application of the reshape graphic settings that support higher quality rendering, and (2) a wider variety of road conditions across various weather scenarios. For on-road driving, these conditions include significant uphill and downhill gradients, tunnels, and underground parking facilities; for off-road driving, the environment features mountains, deserts, snow-covered terrains, and forests, thereby enabling more precise and complex rotational dynamics throughout the map.
Experimental Setting: Similar to XVO [37], our framework is trained on data from a single city in the nuScenes dataset. Unlike XVO, we observed that Boston-Seaport, Singapore-Queenstown, and Singapore-Holland Village contain the majority of challenging conditions, such as rain, nighttime driving, light reflections, and heavy traffic. Therefore, we use Singapore-OneNorth as our supervised training dataset and the remaining regions, KITTI, Argoverse 2, and GTA, as test datasets. It is important to note that the main evaluation is done on datasets that were unseen by our model during training and without assumed camera parameters.
Baselines: We compare against the four most related baselines that demonstrate generalization across datasets without requiring additional fine-tuning: TartanVO [70], XVO [37], DPVO [63], and Metric3D+Droid-SLAM (M+DS) [27, 62]. TartanVO employs effective random cropping and resizing techniques to simulate diverse camera configurations, thereby enhancing the generalization of rotation estimation across unseen datasets. XVO leverages a multi-modality architecture to implicitly extract richer spatial features and integrates self-training to achieve robust generalization performance in both rotation estimation and real-world scale recovery. DPVO employs a recurrent update operator for patch-based correspondence, complemented by differentiable bundle adjustment, demonstrating strong zero-shot performance in rotation estimation. M+DS utilizes the generalization capabilities of Metric3Dv2 and Droid-SLAM to accurately estimate metric depth and rotation, effectively recovering the motion trajectory at a real-world scale. Our main baseline is M+DS, which achieves state-of-the-art generalization results across datasets.
Metrics: To provide a comprehensive analysis of the results, we utilize Translation Error $(t_{err})$ , Rotation Error $(r_{err})$ , Absolute Trajectory Error (ATE), and Scale Error $(s_{err})$ [22, 37]. $t_{err}$ and $r_{err}$ compute the average translation error (\%) and rotation error $(^{\circ} / 100\mathrm{m})$ across all possible subsequences within a test sequence with lengths ranging from 100 to 800 meters. ATE measures the deviation between the estimated trajectory and the ground-truth trajectory by comparing the positions of corresponding poses, making it an effective metric for measuring drift over time. The scale error $(s_{err})$ measures the average discrepancy between the predicted translation and the ground truth translation. Combined with rotation error $(r_{err})$ and Absolute Trajectory Error (ATE), it allows us to effectively determine whether accumulated drift is attributed to scale inaccuracies or rotational deviations.
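
As a reference for how the trajectory-level quantities can be computed, here is a minimal sketch of ATE and scale error. The exact benchmark protocol (trajectory alignment and the 100 to 800 meter sub-sequence averaging behind $t_{err}$ and $r_{err}$) follows [22, 37] and is omitted here; function names are illustrative.

```python
import numpy as np

def absolute_trajectory_error(pred_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """RMSE ATE between predicted and ground-truth positions (N, 3),
    assuming both trajectories are already expressed in the same frame."""
    return float(np.sqrt(np.mean(np.sum((pred_xyz - gt_xyz) ** 2, axis=1))))

def scale_error(pred_t: np.ndarray, gt_t: np.ndarray) -> float:
    """Average relative discrepancy between predicted and ground-truth translation norms (N, 3)."""
    pred_n = np.linalg.norm(pred_t, axis=1)
    gt_n = np.linalg.norm(gt_t, axis=1)
    return float(np.mean(np.abs(pred_n - gt_n) / np.maximum(gt_n, 1e-8)))
```
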
Table 1. Comparative Analysis Across Datasets. We compare ZeroVO variants with existing baselines using standard metrics of translation, rotation, absolute trajectory, and scale errors. All methods are provided with estimated camera intrinsics and metric depth. ZeroVO+ is our model trained with further data using semi-supervision, and LiteZeroVO+ is a smaller model variant for resource-constrained settings. Our models demonstrate strong performance across metrics and datasets, particularly in metric translation estimation. As highlighted by the scale error, GTA and nuScenes contain challenging evaluation settings, including nighttime, weather variations, haze, and reflections. We note that TartanVO and DPVO baselines (in gray) only predict up-to-scale motion and use privileged information, i.e., ground-truth scale alignment in evaluation.
<table><tr><td rowspan="2">Method</td><td colspan="4">KITTI 00-10</td><td colspan="4">nuScenes</td><td colspan="4">Argoverse</td><td colspan="4">GTA</td></tr><tr><td>\(t_{err}\)</td><td>\(r_{err}\)</td><td>ATE</td><td>\(s_{err}\)</td><td>\(t_{err}\)</td><td>\(r_{err}\)</td><td>ATE</td><td>\(s_{err}\)</td><td>\(t_{err}\)</td><td>\(r_{err}\)</td><td>ATE</td><td>\(s_{err}\)</td><td>\(t_{err}\)</td><td>\(r_{err}\)</td><td>ATE</td><td>\(s_{err}\)</td></tr><tr><td>XVO [37]</td><td>16.82</td><td>3.84</td><td>168.43</td><td>0.17</td><td>12.75</td><td>5.11</td><td>8.30</td><td>0.16</td><td>9.13</td><td>4.86</td><td>5.70</td><td>0.12</td><td>25.56</td><td>12.64</td><td>28.02</td><td>0.21</td></tr><tr><td>M+DS [27]</td><td>14.22</td><td>2.72</td><td>154.77</td><td>0.09</td><td>17.08</td><td>1.46</td><td>10.46</td><td>0.18</td><td>16.67</td><td>1.79</td><td>8.51</td><td>0.13</td><td>23.53</td><td>10.38</td><td>12.96</td><td>0.26</td></tr><tr><td>ZeroVO</td><td>7.69</td><td>2.72</td><td>105.07</td><td>0.07</td><td>10.98</td><td>4.48</td><td>6.79</td><td>0.14</td><td>6.83</td><td>3.13</td><td>4.10</td><td>0.11</td><td>14.74</td><td>10.63</td><td>8.55</td><td>0.17</td></tr><tr><td>ZeroVO+</td><td>6.81</td><td>2.69</td><td>104.69</td><td>0.06</td><td>9.74</td><td>4.37</td><td>6.03</td><td>0.12</td><td>4.64</td><td>2.83</td><td>3.05</td><td>0.09</td><td>13.42</td><td>7.99</td><td>8.24</td><td>0.17</td></tr><tr><td>LiteZeroVO+</td><td>8.85</td><td>2.90</td><td>118.54</td><td>0.08</td><td>11.57</td><td>4.44</td><td>6.87</td><td>0.13</td><td>7.65</td><td>3.82</td><td>5.28</td><td>0.11</td><td>15.93</td><td>12.16</td><td>11.26</td><td>0.18</td></tr><tr><td>TartanVO [70]</td><td>13.85</td><td>3.27</td><td>103.07</td><td>-</td><td>10.27</td><td>6.35</td><td>6.26</td><td>-</td><td>11.17</td><td>5.30</td><td>7.03</td><td>-</td><td>10.56</td><td>9.35</td><td>3.82</td><td>-</td></tr><tr><td>DPVO [63]</td><td>8.31</td><td>2.37</td><td>78.53</td><td>-</td><td>4.34</td><td>2.85</td><td>2.66</td><td>-</td><td>2.66</td><td>1.25</td><td>1.59</td><td>-</td><td>12.65</td><td>10.67</td><td>4.33</td><td>-</td></tr></table>
Table 2. Ablation Analysis for Model and Training Components. We analyze various model components: Flow module (F), Depth module (D), Language prior (L), Semi-supervised training (S), and Pseudo-label Selection (P). Flow, depth, and language correspond to the proposed supervised ZeroVO model. Results with additional semi-supervised training are shown as ZeroVO+ (showing state-of-the-art performance by integrating all of our proposed components).
<table>
<tr><td rowspan="2">F</td><td rowspan="2">D</td><td rowspan="2">L</td><td rowspan="2">S</td><td rowspan="2">P</td><td colspan="4">KITTI 00-10</td><td colspan="4">nuScenes</td><td colspan="4">Argoverse</td><td colspan="4">GTA</td></tr>
<tr><td>\( t_{err} \)</td><td>\( r_{err} \)</td><td>ATE</td><td>\( s_{err} \)</td><td>\( t_{err} \)</td><td>\( r_{err} \)</td><td>ATE</td><td>\( s_{err} \)</td><td>\( t_{err} \)</td><td>\( r_{err} \)</td><td>ATE</td><td>\( s_{err} \)</td><td>\( t_{err} \)</td><td>\( r_{err} \)</td><td>ATE</td><td>\( s_{err} \)</td></tr>
<tr><td>✓</td><td></td><td></td><td></td><td></td><td>18.76</td><td>5.49</td><td>174.24</td><td>0.18</td><td>19.40</td><td>7.42</td><td>12.54</td><td>0.22</td><td>12.23</td><td>6.34</td><td>9.42</td><td>0.20</td><td>25.68</td><td>15.52</td><td>25.38</td><td>0.25</td></tr>
<tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td>8.99</td><td>2.92</td><td>123.42</td><td>0.08</td><td>12.26</td><td>5.23</td><td>8.40</td><td>0.15</td><td>8.62</td><td>4.11</td><td>5.71</td><td>0.11</td><td>16.76</td><td>12.75</td><td>12.37</td><td>0.19</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td></td><td></td><td>7.69</td><td>2.72</td><td>105.07</td><td>0.07</td><td>10.98</td><td>4.48</td><td>6.79</td><td>0.14</td><td>6.83</td><td>3.13</td><td>4.10</td><td>0.11</td><td>14.74</td><td>10.63</td><td>8.55</td><td>0.17</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>9.11</td><td>2.88</td><td>117.49</td><td>0.08</td><td>12.25</td><td>5.39</td><td>7.53</td><td>0.14</td><td>7.98</td><td>3.95</td><td>5.13</td><td>0.11</td><td>16.49</td><td>11.95</td><td>10.27</td><td>0.18</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>6.81</td><td>2.69</td><td>104.69</td><td>0.06</td><td>9.74</td><td>4.37</td><td>6.03</td><td>0.12</td><td>4.64</td><td>2.83</td><td>3.05</td><td>0.09</td><td>13.42</td><td>7.99</td><td>8.24</td><td>0.17</td></tr>
</table>
# 4.2. Results
Generalization Performance: To examine the generalization ability of our model, we evaluate it on entire sequences on KITTI, the unseen regions in nuScenes, and the simulated dataset GTA. Table 1 compares ZeroVO+ with prior baselines in a zero-shot setting. For a fair comparison of the zero-shot performance, all models are provided with the same estimated camera intrinsics and metric depth (if required). TartanVO and DPVO can only estimate rotation and require scale alignment with ground-truth translation to reconstruct the trajectory at a real-world scale. From the results in Table 1, our model achieves superior performance across nearly all metrics on the four datasets. It is important to note that sequences on KITTI are significantly longer compared to those in other datasets, making them more prone to accumulating large drift (i.e., high ATE). Our method accurately predicts rotation and translation scale on KITTI, resulting in the lowest ATE among all baselines, even without incorporating multi-frame temporal optimization. The results on the GTA dataset further demonstrate the strong generalization capability of our model, achieving ATE results comparable to scale-aligned DPVO, which leverages privileged evaluation. In Table 3, we divide the remaining regions in nuScenes into different subsets based on various weather conditions: day, night, rain, and light. The strong light scenario is caused by severe light reflections. We find that night and strong light conditions present the most challenging scenarios, as it is difficult for the model to detect and extract valuable information. We demonstrate that our model achieves the best performance across all conditions, highlighting its robustness against external noise.
Figure 2. Qualitative Results on KITTI. We show trajectory prediction results across the four most complex driving sequences (00, 02, 05, and 08) from the KITTI dataset. Each subplot illustrates the trajectories generated by our proposed model and the baseline models alongside the ground truth trajectory. The qualitative results demonstrate that our approach achieves the highest alignment with the ground truth, particularly in challenging turns and extended straight paths. These findings highlight the robustness of our method in handling complex and diverse driving scenarios.

Ablation Study: In Table 2, we study the role of each module in our model structure. We begin by analyzing the impact of our depth module. When the model is equipped with only the flow module, it struggles to generalize to unseen scenarios, particularly in terms of scale estimation. This outcome is expected, as predicting scale from a single image without any additional context is an ill-posed problem. By incorporating the depth module, the model improves across all metrics, particularly in scale estimation. This improvement indicates that, by concatenating the estimated metric depth with the intrinsic image, the model can effectively learn coherent 3D spatial information, even in the presence of noise, and accurately estimate scale. It is also noteworthy that the depth module improves rotation estimation. This demonstrates that leveraging both depth and optical flow to unproject 3D scene flow provides crucial 3D correspondence information that leads to improved rotation estimation. The experiment with textual information further demonstrates the model's robustness against noise. Under challenging driving conditions, such as numerous dynamic objects, darkness, strong light reflections, rain, and fog, the estimated camera intrinsics and metric depth are highly susceptible to becoming unreliable. The general text description provides extra 3D information, such as object layouts and movements, which helps the model maintain robustness in highly noisy environments. Finally, we demonstrate the effectiveness of our semi-supervision approach using pseudo-label selection. Without pseudo-label selection, we observe a drop in performance compared to the supervised-only model. This decline is due to the introduction of excessive pseudo-labeled examples with redundancy and uncertain label quality, which hinders model training. Our pseudo-label selection process effectively filters out highly redundant and low-quality pseudo-labeled examples, achieving the best performance among all zero-shot metric-scale models. Further ablations and analysis can be found in our supplementary material.
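The ablation attributes the rotation gains to unprojecting depth and optical flow into 3D scene flow. The snippet below is an illustrative sketch of that unprojection under a pinhole camera model; the tensor layout and function names are assumptions for exposition, not the paper's implementation.

```python
import torch

def backproject(depth, K):
    """Lift a depth map (H, W) to 3D points (H, W, 3) with pinhole intrinsics K (3x3)."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()   # homogeneous pixels
    rays = pix @ torch.linalg.inv(K).T                               # K^{-1} [u, v, 1]^T
    return rays * depth.unsqueeze(-1)

def scene_flow(depth_t, depth_t1, flow, K):
    """Approximate 3D scene flow from two metric depth maps and 2D optical flow.

    flow: (H, W, 2) forward flow from frame t to t+1 (pixels). Points in frame t
    are matched to frame t+1 via the flow, lifted with the corresponding depth,
    and subtracted to give per-pixel 3D displacements.
    """
    H, W = depth_t.shape
    pts_t = backproject(depth_t, K)
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    u1 = (u + flow[..., 0]).round().clamp(0, W - 1).long()
    v1 = (v + flow[..., 1]).round().clamp(0, H - 1).long()
    pts_t1 = backproject(depth_t1, K)[v1, u1]                        # matched 3D points
    return pts_t1 - pts_t                                            # (H, W, 3) scene flow
```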
Qualitative Analysis: Fig. 2 depicts the most complex and longest trajectories on KITTI, compared with the two best-performing baselines. The trajectory of DPVO is aligned with the ground-truth translation after scale adjustment; it is therefore straightforward to see how inaccurate rotation estimation results in drift accumulation. A comparison between the results of DPVO and M+DS reveals how inaccuracies in translation estimation further exacerbate drift accumulation. By leveraging general textual information and unprojecting 2D data into 3D space, our model effectively extracts more accurate and inherent correspondence features, which enhance robustness even when the estimated depth or camera intrinsics are noisy.
Table 3. Condition Breakdown on nuScenes. We show results breakdown (ATE) over scenes categorized by weather and lens settings. We sample from nuScenes the Day, Night, and Rainy scenes, along with particularly challenging frames that include severe light reflections. Our ZeroVO+ model performs best overall. We note that TartanVO and DPVO baselines only predict up-to-scale motion and use ground-truth scale alignment in inference.
<table><tr><td>Method</td><td>Day</td><td>Night</td><td>Rainy</td><td>Light</td></tr><tr><td>XVO [37]</td><td>6.61</td><td>14.41</td><td>15.99</td><td>15.73</td></tr><tr><td>M+DS [27]</td><td>6.08</td><td>17.19</td><td>17.49</td><td>18.54</td></tr><tr><td>ZeroVO</td><td>3.90</td><td>10.33</td><td>12.63</td><td>13.33</td></tr><tr><td>ZeroVO+</td><td>3.60</td><td>10.26</td><td>10.10</td><td>11.15</td></tr></table>
# 5. Conclusion
We introduced ZeroVO, a novel transformer-based framework designed to tackle the challenge of visual odometry generalization under adverse and unseen conditions. ZeroVO integrates rich multimodal cues, spanning geometry, language, and vision, within a unified architecture to enhance robustness and adaptability in complex environments. Its camera-agnostic design, combined with a semi-supervised training paradigm, enables effective handling of noisy data and seamless adaptation to novel scenarios. Extensive evaluation across diverse and challenging benchmarks demonstrates that ZeroVO establishes a new standard for zero-shot VO performance, underscoring its promise for real-world deployment without the need for camera recalibration or domain-specific tuning.
# 6. Acknowledgments
We thank the Red Hat Collaboratory (awards 2024-01-RH02, 2024-01-RH07) and National Science Foundation (IIS-2152077) for supporting this research.
# References
[1] Aditya Agarwal, Daniel Maturana, and Sebastian Scherer. Visual odometry in smoke occluded environments. Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-RI-TR-15-07, 2014. 1
|
| 197 |
+
[2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In CVPR, 2015. 2
|
| 198 |
+
[3] Ali Azarbayejani and Alex P Pentland. Recursive estimation of motion, structure, and focal length. PAMI, 1995. 1
|
| 199 |
+
[4] David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785, 2019. 2
|
| 200 |
+
[5] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. NeurIPS, 2019. 2
|
| 201 |
+
[6] Dominique Brunet, Edward R Vrscay, and Zhou Wang. On the mathematical properties of the structural similarity index. T-IP, 2011. 5
|
| 202 |
+
[7] Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, José Neira, Ian Reid, and John J Leonard. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. T-RO, 2016. 1, 2
|
| 203 |
+
[8] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In CVPR, 2020. 6
|
| 204 |
+
[9] Benjamin Caine, Rebecca Roelofs, Vijay Vasudevan, Jiquan Ngiam, Yuning Chai, Zhifeng Chen, and Jonathon Shlens. Pseudo-labeling for scalable 3d object detection. In arXiv preprint arXiv:2103.02093, 2021. 2, 5
|
| 205 |
+
[10] Carlos Campos, Richard Elvira, Juan J Gómez Rodríguez, José MM Montiel, and Juan D Tardós. Orb-slam3: An accurate open-source library for visual, visual-inertial, and multimap slam. T-RO, 2021. 1
|
| 206 |
+
[11] Andrea Ceccarelli and Francesco Secci. RGB cameras failures and their effects in autonomous driving applications. T-DSC, 2022. 1
|
| 207 |
+
[12] Wei-Ge Chen, Irina Spiridonova, Jianwei Yang, Jianfeng Gao, and Chunyuan Li. Llava-interactive: An all-in-one demo for image chat, segmentation, generation and editing. arXiv preprint arXiv:2311.00571, 2023. 2
|
| 208 |
+
[13] Alessandro Chiuso, Paolo Favaro, Hailin Jin, and Stefano Soatto. Structure from motion causally integrated over time. PAMI, 2002. 1
|
| 209 |
+
[14] Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc V Le. Semi-supervised sequence modeling with cross-view training. arXiv preprint arXiv:1809.08370, 2018. 2
|
| 210 |
+
[15] Bo Dai and Dahua Lin. Contrastive learning for image captioning. NeurIPS, 30, 2017. 2
|
| 211 |
+
[16] Ernst Dieter Dickmanns. Dynamic vision for perception and control of motion. Springer, 2007. 1
|
| 212 |
+
|
| 213 |
+
[17] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In CoRL, 2017. 6
|
| 214 |
+
[18] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021. 1, 4
|
| 215 |
+
[19] Jakob Engel, Vladlen Koltun, and Daniel Cremers. Direct sparse odometry. In PAMI, 2017. 1
|
| 216 |
+
[20] Friedrich Fraundorfer and Davide Scaramuzza. Visual odometry: Part i: The first 30 years and fundamentals. RAM, 2011. 1
|
| 217 |
+
[21] Jiyang Gao, Jiang Wang, Shengyang Dai, Li-Jia Li, and Ram Nevatia. Note-rcnn: Noise tolerant ensemble rcnn for semi-supervised object detection. In CVPR, 2019. 2
|
| 218 |
+
[22] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 2012. 6
|
| 219 |
+
[23] Vitor Guizilini, Igor Vasiljevic, Dian Chen, Rares Ambrus, and Adrien Gaidon. Towards zero-shot scale-aware monocular depth estimation. In CVPR, 2023. 1
|
| 220 |
+
[24] Suchin Gururangan, Tam Dang, Dallas Card, and Noah A Smith. Variational pretraining for semi-supervised text classification. arXiv preprint arXiv:1906.02242, 2019. 2
|
| 221 |
+
[25] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. 2
|
| 222 |
+
[26] Noriaki Hirose, Fei Xia, Roberto Martín-Martín, Amir Sadeghian, and Silvio Savarese. Deep visual mpc-policy learning for navigation. RA-L, 4(4), 2019. 2
|
| 223 |
+
[27] Mu Hu, Wei Yin, Chi Zhang, Zhipeng Cai, Xiaoxiao Long, Hao Chen, Kaixuan Wang, Gang Yu, Chunhua Shen, and Shaojie Shen. Metric3d v2: A versatile monocular geometric foundation model for zero-shot metric depth and surface normal estimation. PAMI, 2024. 1, 2, 4, 5, 6, 7, 8
|
| 224 |
+
[28] Young Kyun Jang and Nam Ik Cho. Generalized product quantization network for semi-supervised image retrieval. In CVPR, 2020. 2
|
| 225 |
+
[29] Jisoo Jeong, Seungeui Lee, Jeesoo Kim, and Nojun Kwak. Consistency-based semi-supervised learning for object detection. In NeurIPS, 2019. 5
|
| 226 |
+
[30] Takayuki Kanai, Igor Vasiljevic, Vitor Guizilini, and Kazuhiro Shintani. Self-supervised geometry-guided initialization for robust monocular visual odometry. arXiv preprint arXiv:2406.00929, 2024. 2
|
| 227 |
+
[31] Nimet Kaygusuz, Oscar Mendez, and Richard Bowden. Mdn-vo: Estimating visual odometry with confidence. In IROS, 2021. 1
|
| 228 |
+
[32] Alex Kendall and Roberto Cipolla. Geometric loss functions for camera pose regression with deep learning. In CVPR, 2017. 1
|
| 229 |
+
[33] Alex Kendall, Matthew Grimes, and Roberto Cipolla. Posenet: A convolutional network for real-time 6-dof camera relocalization. In ICCV, 2015. 1
|
| 230 |
+
|
| 231 |
+
[34] Hee Jae Kim and Eshed Ohn-Bar. Motion diversification networks. In CVPR, 2024. 2
|
| 232 |
+
[35] Pyojin Kim, Hyon Lim, and H Jin Kim. Robust visual odometry to irregular illumination changes with rgb-d camera. In IROS, 2015. 1
|
| 233 |
+
[36] Andrew V Knyazev and Merico E Argentati. Principal angles between subspaces in an a-based scalar product: algorithms and perturbation estimates. SIAM Journal on Scientific Computing, 2002. 5
|
| 234 |
+
[37] Lei Lai, Zhongkai Shangguan, Jimuyang Zhang, and Eshed Ohn-Bar. XVO: Generalized visual odometry via cross-modal self-training. In ICCV, 2023. 1, 2, 4, 5, 6, 7, 8
|
| 235 |
+
[38] Lei Lai, Eshed Ohn-Bar, Sanjay Arora, and John Seon Keun Yi. Uncertainty-guided never-ending learning to drive. In CVPR, 2024. 2
|
| 236 |
+
[39] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICMLW, 2013. 5
|
| 237 |
+
[40] Qimai Li, Xiao-Ming Wu, Han Liu, Xiaotong Zhang, and Zhichao Guan. Label efficient semi-supervised learning via graph filtering. In CVPR, 2019. 5
|
| 238 |
+
[41] Ruihao Li, Sen Wang, Zhiqiang Long, and Dongbing Gu. Undeepvo: Monocular visual odometry through unsupervised deep learning. In ICRA, 2018. 5
|
| 239 |
+
[42] Lahav Lipson, Zachary Teed, and Jia Deng. Deep patch visual slam. In ECCV, 2024. 1, 2
|
| 240 |
+
[43] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In CVPR, 2024. 1, 2
|
| 241 |
+
[44] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, OCR, and world knowledge, 2024. 4
|
| 242 |
+
[45] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. NeurIPS, 2024. 1, 2
|
| 243 |
+
[46] Shilong Liu, Hao Cheng, Haotian Liu, Hao Zhang, Feng Li, Tianhe Ren, Xueyan Zou, Jianwei Yang, Hang Su, Jun Zhu, et al. Llava-plus: Learning to use tools for creating multimodal agents. arXiv preprint arXiv:2311.05437, 2023. 2
|
| 244 |
+
[47] Reza Mahjourian, Martin Wicke, and Anelia Angelova. Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. In CVPR, 2018. 5
|
| 245 |
+
[48] Kanti V Mardia, Peter E Jupp, and KV Mardia. Directional statistics. 2000. 4
|
| 246 |
+
[49] Nico Messikommer, Giovanni Cioffi, Mathias Gehrig, and Davide Scaramuzza. Reinforcement learning meets visual odometry. ECCV, 2024. 2
|
| 247 |
+
[50] David Mohlin, Josephine Sullivan, and Gérald Bianchi. Probabilistic orientation estimation with matrix fisher distributions. In NeurIPS, 2020. 4
|
| 248 |
+
[51] Raul Mur-Artal and Juan D Tardós. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. T-RO, 2017. 1, 3
|
| 249 |
+
[52] Raul Mur-Artal, Jose Maria Martinez Montiel, and Juan D Tardos. Orb-slam: a versatile and accurate monocular slam system. T-RO, 2015. 1
|
| 250 |
+
|
| 251 |
+
[53] Luigi Piccinelli, Yung-Hsu Yang, Christos Sakaridis, Mattia Segu, Siyuan Li, Luc Van Gool, and Fisher Yu. Unidepth: Universal monocular metric depth estimation. In CVPR, 2024. 1, 2
|
| 252 |
+
[54] Alberto Pretto, Emanuele Menegatti, Maren Bennewitz, Wolfram Burgard, and Enrico Pagello. A visual odometry framework robust to motion blur. In ICRA, 2009. 1
|
| 253 |
+
[55] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In EMNLP, 2019. 1, 4, 5
|
| 254 |
+
[56] Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. In ICLR, 2021. 5
|
| 255 |
+
[57] Chris Rockwell, Justin Johnson, and David F Fouhey. The 8-point algorithm as an inductive bias for relative pose prediction by vits. In 3DV. IEEE, 2022. 1
|
| 256 |
+
[58] Zhengxiang Shi, Francesco Tonolini, Nikolaos Aletras, Emine Yilmaz, Gabriella Kazai, and Yunlong Jiao. Rethinking semi-supervised learning with language models. arXiv preprint arXiv:2305.13002, 2023. 2
|
| 257 |
+
[59] Nasim Souly, Concetto Spampinato, and Mubarak Shah. Semi supervised semantic segmentation using generative adversarial network. In ICCV, 2017. 5
|
| 258 |
+
[60] Chengzhou Tang and Ping Tan. BA-net: Dense bundle adjustment network. arXiv preprint arXiv:1806.04807, 2018. 3
|
| 259 |
+
[61] Yihe Tang, Weifeng Chen, Yijun Luo, and Yuting Zhang. Humble teachers teach better students for semi-supervised object detection. In CVPR, 2021. 2
|
| 260 |
+
[62] Zachary Teed and Jia Deng. DROID-SLAM: Deep visual slam for monocular, stereo, and rgb-d cameras. NeurIPS, 2021. 1, 2, 3, 6
|
| 261 |
+
[63] Zachary Teed, Lahav Lipson, and Jia Deng. Deep patch visual odometry. NeurIPS, 2023. 1, 2, 3, 6, 7
|
| 262 |
+
[64] Ran Tian, Boyi Li, Xinshuo Weng, Yuxiao Chen, Edward Schmerling, Yue Wang, Boris Ivanovic, and Marco Pavone. Tokenize the world into object-level knowledge to address long-tail events in autonomous driving. arXiv preprint arXiv:2407.00959, 2024. 2
|
| 263 |
+
[65] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 1, 4
|
| 264 |
+
[66] Sudheendra Vijayanarasimhan, Susanna Ricco, Cordelia Schmid, Rahul Sukthankar, and Katerina Fragkiadaki. Learning of structure and motion from video. In CVPR, 2017. 2
|
| 265 |
+
[67] He Wang, Yezhen Cong, Or Litany, Yue Gao, and Leonidas J Guibas. 3DIOUMatch: Leveraging IoU prediction for semi-supervised 3D object detection. In CVPR, 2021. 5
|
| 266 |
+
[68] Sen Wang, Ronald Clark, Hongkai Wen, and Niki Trigoni. Deepvo: Towards end-to-end visual odometry with deep recurrent convolutional neural networks. In ICRA, 2017. 1
|
| 267 |
+
[69] Wenshan Wang, Delong Zhu, Xiangwei Wang, Yaoyu Hu, Yuheng Qiu, Chen Wang, Yafei Hu, Ashish Kapoor, and Sebastian Scherer. Tartanair: A dataset to push the limits of visual slam. In IROS, 2020. 1
|
| 268 |
+
|
| 269 |
+
[70] Wenshan Wang, Yaoyu Hu, and Sebastian Scherer. Tartanvo: A generalizable learning-based vo. In CoRL, 2021. 1, 2, 3, 4, 6, 7
|
| 270 |
+
[71] Yan Wang, Wei-Lun Chao, Divyansh Garg, Bharath Hariharan, Mark Campbell, and Kilian Q Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. In CVPR, 2019. 4
|
| 271 |
+
[72] Benjamin Wilson, William Qi, Tanmay Agarwal, John Lambert, Jagjeet Singh, Siddhesh Khandelwal, Bowen Pan, Ratnesh Kumar, Andrew Hartnett, Jhony Kaesemodel Pontes, Deva Ramanan, Peter Carr, and James Hays. Argoverse 2: Next generation datasets for self-driving perception and forecasting, 2023. 6
|
| 272 |
+
[73] I Zeki Yalniz, Herve Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. arXiv preprint arXiv:1905.00546, 2019. 5
|
| 273 |
+
[74] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Bin Xiao, Ce Liu, Lu Yuan, and Jianfeng Gao. Unified contrastive learning in image-text-label space. In CVPR, 2022. 2
|
| 274 |
+
[75] Jiazhi Yang, Shenyuan Gao, Yihang Qiu, Li Chen, Tianyu Li, Bo Dai, Kashyap Chitta, Penghao Wu, Jia Zeng, Ping Luo, et al. Generalized predictive model for autonomous driving. In CVPR, 2024. 5
|
| 275 |
+
[76] Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything: Unleashing the power of large-scale unlabeled data. In CVPR, 2024. 2
|
| 276 |
+
[77] Nan Yang, Lukas von Stumberg, Rui Wang, and Daniel Cremers. D3vo: Deep depth, deep pose and deep uncertainty for monocular visual odometry. In CVPR, 2020. 1
|
| 277 |
+
[78] Xiangli Yang, Zixing Song, Irwin King, and Zenglin Xu. A survey on deep semi-supervised learning. arXiv preprint arXiv:2103.00550, 2021. 5
|
| 278 |
+
[79] Weicai Ye, Xinyue Lan, Shuo Chen, Yuhang Ming, Xingyuan Yu, Hujun Bao, Zhaopeng Cui, and Guofeng Zhang. Pvo: Panoptic visual odometry. In CVPR, 2023. 2
|
| 279 |
+
[80] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from a single image. In CVPR, 2023. 1
|
| 280 |
+
[81] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In CVPR, 2016. 2
|
| 281 |
+
[82] Ziyao Zeng, Daniel Wang, Fengyu Yang, Hyoungseob Park, Stefano Soatto, Dong Lao, and Alex Wong. Wordepth: Variational language prior for monocular depth estimation. In CVPR, 2024. 2
|
| 282 |
+
[83] Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, and Ian Reid. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In CVPR, 2018. 5
|
| 283 |
+
[84] Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ravi Garg, and Ian Reid. Df-vo: What should be learnt for visual odometry? arXiv preprint arXiv:2103.00933, 2021. 1
|
| 284 |
+
[85] Hui Zhang, K Wong Kwan-yee, and Guoqiang Zhang. Camera calibration from images of spheres. PAMI, 2007. 2
|
| 285 |
+
|
| 286 |
+
[86] Jimuyang Zhang, Ruizhao Zhu, and Eshed Ohn-Bar. Selfd: Self-learning large-scale driving policies from the web. In CVPR, 2022. 2
|
| 287 |
+
[87] Jimuyang Zhang, Zanming Huang, Arjit Ray, and Eshed Ohn-Bar. Feedback-guided autonomous driving. In CVPR, 2024. 2
|
| 288 |
+
[88] Yueqiang Zhang, Langming Zhou, Haibo Liu, and Yang Shang. A flexible online camera calibration using line segments. Journal of Sensors, 2016. 2
|
| 289 |
+
[89] Zhengyou Zhang. A flexible new technique for camera calibration. PAMI, 2000.
|
| 290 |
+
[90] Zhengyou Zhang. Camera calibration with one-dimensional objects. PAMI, 2004. 2
|
| 291 |
+
[91] Shengyu Zhao, Yilun Sheng, Yue Dong, Eric I Chang, Yan Xu, et al. Maskflownet: Asymmetric feature matching with learnable occlusion mask. In CVPR, 2020. 4, 5
|
| 292 |
+
[92] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017. 5
|
| 293 |
+
[93] Ruizhao Zhu, Peng Huang, Eshed Ohn-Bar, and Venkatesh Saligrama. Learning to drive anywhere. In CoRL, 2023. 2
|
| 294 |
+
[94] Shengjie Zhu, Abhinav Kumar, Masa Hu, and Xiaoming Liu. Tame a wild camera: in-the-wild monocular camera calibration. NeurIPS, 2024. 2, 4, 5
|
| 295 |
+
[95] Zihan Zhu, Songyou Peng, Viktor Larsson, Zhaopeng Cui, Martin R Oswald, Andreas Geiger, and Marc Pollefeys. Nicer-slam: Neural implicit scene encoding for rgb slam. In 3DV, 2024. 2
|
CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc215173194754cb42acaf3e94075585b8450ae0b174a62c3041f75e923f1444
size 379599

CVPR/2025/ZeroVO_ Visual Odometry with Minimal Assumptions/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6cfd5ea67ab07b583c2074364abc3ee7da4719b05308945681d700e2989b96b8
size 370313

CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/64d1b96f-e41e-4ee5-88fe-92864a01628b_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:808f9123defe82c321607d2d123bd2b8d945babb2cbbc1485d40d28274d67503
size 78646

CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/64d1b96f-e41e-4ee5-88fe-92864a01628b_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8803d6fd266dcb6b5fee4653926228c1bdc80248433b9bbe1a70afef407bbfef
size 96753

CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/64d1b96f-e41e-4ee5-88fe-92864a01628b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfca82469d6d6684f22efd92bcdfe7c82d7a7c8fb94cda414c283a7485040f59
size 4537492

CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/full.md
ADDED
@@ -0,0 +1,287 @@
# ZoomLDM: Latent Diffusion Model for multi-scale image generation
|
| 2 |
+
|
| 3 |
+
Srikar Yellapragada* Alexandros Graikos* Kostas Triaridis
|
| 4 |
+
Prateek Prasanna Rajarsi Gupta Joel Saltz Dimitris Samaras
|
| 5 |
+
Stony Brook University
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Diffusion models have revolutionized image generation, yet several challenges restrict their application to large-image domains, such as digital pathology and satellite imagery. Given that it is infeasible to directly train a model on 'whole' images from domains with potential gigapixel sizes, diffusion-based generative methods have focused on synthesizing small, fixed-size patches extracted from these images. However, generating small patches has limited applicability since patch-based models fail to capture the global structures and wider context of large images, which can be crucial for synthesizing (semantically) accurate samples. To overcome this limitation, we present ZoomLDM, a diffusion model tailored for generating images across multiple scales. Central to our approach is a novel magnification-aware conditioning mechanism that utilizes self-supervised learning (SSL) embeddings and allows the diffusion model to synthesize images at different 'zoom' levels, i.e., fixed-size patches extracted from large images at varying scales. ZoomLDM synthesizes coherent histopathology images that remain contextually accurate and detailed at different zoom levels, achieving state-of-the-art image generation quality across all scales and excelling in the data-scarce setting of generating thumbnails of entire large images. The multi-scale nature of ZoomLDM unlocks additional capabilities in large image generation, enabling computationally tractable and globally coherent image synthesis up to $4096 \times 4096$ pixels and $4 \times$ super-resolution. Additionally, multi-scale features extracted from ZoomLDM are highly effective in multiple instance learning experiments.
|
| 10 |
+
|
| 11 |
+
# 1. Introduction
|
| 12 |
+
|
| 13 |
+
Diffusion models have achieved remarkable success in photorealistic image synthesis [3], benefiting from the availability of vast multi-modal datasets [5, 40] and sophisticated conditioning techniques [20, 36]. Latent Diffusion models (LDMs) [38] have further advanced high-resolution image generation by introducing a two-step process that first compresses the images with a learned encoder and then trains the generative diffusion model in that encoder's latent space. In the natural image domain, LDMs like Stable Diffusion XL [36], which generates $1024 \times 1024$ images, have made high-resolution generation fast and cheap. Although such models demonstrate the potential of further scaling image diffusion to larger sizes, large-image domains such as digital histopathology and satellite imagery are beyond their feasible scope, as images there are typically at the gigapixel scale (e.g., $32,000 \times 32,000$ pixels).

Figure 1. ZoomLDM can generate synthetic image patches at multiple scales (left). It can generate large images that preserve spatial context (center) and perform super-resolution (right), without any additional training. Large images from prior work [17, 26] suffer from blurriness and lack of global context.
|
| 19 |
+
|
| 20 |
+
Apart from scale, large-image domains also lack paired image-annotation data with sufficient detail, which has been key to the success of text-to-image diffusion models. Without access to a conditioning signal during training and inference, the performance of diffusion models degrades significantly [32]. At the same time, obtaining annotations for large images can be complex, as it is both a laborious process for specialized fields, such as medical images, and often ambiguous, since annotators can describe different features at different scales. A satellite image text caption corresponding to 'water', when viewed from up close, can turn into both 'a lake' and 'a river' when viewed from further away, making it necessary to annotate at both levels.
|
| 23 |
+
|
| 24 |
+
Previous works have tried to address the issues of large image sizes and conditioning but are limited in applicability. Harb et al. [18] introduced a pixel-level diffusion model that can accommodate multiple scales (named magnifications) in medical images but lacked conditioning - a crucial element for achieving better image quality and enabling downstream tasks [11, 31, 47]. Graikos et al. [17] utilized embeddings from self-supervised learning (SSL) models to mitigate the need for costly annotations in large-image domains, but only trained a model to generate small patches. Recognizing that none of these methods can tackle the important problem of controllable high-quality large-image synthesis, we propose a unified solution, ZoomLDM.
|
| 25 |
+
|
| 26 |
+
To address large image sizes, we propose training a scale-conditioned diffusion model that learns to generate images at different 'zoom' levels, which correspond to magnifications in histopathology images (Fig. 1 (a)). By conditioning the model on the scale, we control the level of detail contained within each generated pixel. To control generation, we also incorporate a conditioning signal from a self-supervised learning (SSL) encoder. While SSL encoders are great at producing meaningful representations for images, using them in this multi-scale setting is nontrivial as they are usually trained to extract information from patches at a single scale. To share information across scales, we introduce the idea of a cross-magnification latent space; a shared latent space where the embeddings of all scales lie. We implement this with a trainable summarizer module that processes the array of SSL embeddings that describe an image, projecting them to the shared latent space that captures dependencies across all magnifications.
|
| 27 |
+
|
| 28 |
+
We train ZoomLDM on multi-scale histopathology using SSL embeddings from state-of-the-art image encoders as guidance. We find that sharing model weights across all scales significantly boosts the generation quality for scales where data is limited. To eliminate our model's reliance on SSL embeddings when sampling new images, we also train a Conditioning Diffusion Model (CDM) that generates conditions given a scale. This combined approach enables us to synthesize novel high-quality images at all scales.
|
| 29 |
+
|
| 30 |
+
With a multi-scale model, we hypothesize that jointly sampling images across scales would be beneficial for creating coherent images at multiple scales. However, this is challenging because each scale requires its own level of detail, and these details must be aligned across scales. To that end, we propose a novel joint multi-scale sampling approach that exploits ZoomLDM's multi-scale nature. Our cross-magnification latent space provides the necessary detail across scales, enabling large image generation and super-resolution without additional training. This approach effectively constructs a coherent image pyramid, making super-resolution and high-quality large image generation feasible. Our method surpasses previous approaches [17, 26], which struggled in generating either local details or global structure, and presents the first practical $4096 \times 4096$ image generation paradigm in histopathology (see supplementary for a comprehensive evaluation).
|
| 33 |
+
|
| 34 |
+
Finally, we probe ZoomLDM to show that features extracted from our model are highly expressive and suitable for multiple instance learning (MIL) tasks in digital histopathology. Prior work [7, 27] has demonstrated the effectiveness of multi-scale features for MIL, but these methods required training separate encoders for each scale. In contrast, ZoomLDM offers an efficient solution by enabling seamless multi-scale feature extraction using a single model. We condition ZoomLDM with UNI[9], a SoTA SSL model, and extract intermediate features from the denoiser at multiple magnifications for MIL. As expected, fusing ZoomLDM features from multiple scales outperforms using SoTA encoders in our MIL experiments, displaying the efficacy of its multi-scale representations. Surprisingly, our features from just the $20\times$ magnification alone surpass UNI features. We hypothesize that by learning to generate at multiple scales, ZoomLDM has learned to produce more informative features.
|
| 35 |
+
|
| 36 |
+
Our contributions are the following:
|
| 37 |
+
|
| 38 |
+
- We present ZoomLDM, the first multi-scale conditional latent diffusion model that generates images at multiple scales, achieving state-of-the-art synthetic image quality.
|
| 39 |
+
- We introduce a cross-magnification latent space, implemented with a trainable summarizer module, which provides conditioning across scales, allowing ZoomLDM to capture dependencies across magnifications.
|
| 40 |
+
- We propose a novel joint multi-scale sampling approach for generating large images that retain both global context and local fidelity, making us the first to efficiently synthesize good quality histopathology image samples of up to $4096 \times 4096$ pixels.
|
| 41 |
+
- We probe the learned multi-scale representations of ZoomLDM and demonstrate their usefulness by surpassing SoTA encoders on multiple instance learning tasks.
|
| 42 |
+
|
| 43 |
+
# 2. Related Work
|
| 44 |
+
|
| 45 |
+
Diffusion models: Since their initial introduction to image generation in Ho et al. [21], diffusion models have become the dominant generative models for images. Several works have been pivotal; notably class conditioning [31], which highlighted the importance of guidance during training and sampling, and its extensions with classifier [11] and classifier-free guidance [20]. Latent Diffusion Models (LDMs) [38] proposed training the diffusion model in a Variational Autoencoder (VAE) latent space, compressing the input images by a factor of up to $\times 8$ and enabling high-resolution and computationally practical image generation. Denoising Diffusion Implicit Models (DDIM) [43] accelerated the sampling process further, making diffusion models the preferred alternative over all previous generative model approaches (GANs, Normalizing Flows).

Figure 2. Overview of our approach. Left: We extract $256 \times 256$ patches from large images at the initial scale ( $20 \times$ for pathology) and generate SSL embedding matrices using pretrained encoders. The large image is then progressively downsampled by a factor of 2, with patches at each scale paired with the SSL embeddings of all overlapping initial-scale patches. Right: The SSL embeddings and magnification level are fed to the Summarizer, which projects them into the cross-magnification latent space. The diffusion model is trained to generate $256 \times 256$ patches conditioned on the Summarizer's output.
|
| 53 |
+
|
| 54 |
+
Diffusion Models in Large-Image Domains: Despite advances in the domain of natural images, training generative models directly at the gigapixel resolution of large image domains remains infeasible. Proposed alternatives generate images in a coarse-to-fine process by chaining models in a cascading manner [35, 39]. This has led to synthesizing images of up to $1024 \times 1024$ resolution at the cost of increased parameter count and slower inference speed. Recently, PixArt- $\Sigma$ [6] introduced an efficient transformer architecture that enables image generation of up to $4k$ using a weak-to-strong training strategy.
|
| 55 |
+
|
| 56 |
+
In the context of histopathology, previous works have focused on training fixed-size, patch diffusion models [29, 30, 46, 47], with similar approaches in satellite data [13, 41]. Patch models were used to extrapolate to large images in [2], where a pre-generated segmentation mask guides the patch model over the large image, and [17] where a patch model is conditioned on SSL embeddings that smoothly vary across the large image, synthesizing appearance locally. Both methods fail to understand global structures and rely on external sources of information for guidance.
|
| 57 |
+
|
| 58 |
+
More closely related to our work, [18] trains a pathology diffusion model conditioned on image scales. However,
|
| 59 |
+
|
| 60 |
+
limited evaluations and the absence of a conditioning mechanism restrict its applicability. A different approach by Le et al. [26] utilized an infinite-dimensional diffusion model that is resolution-free, meaning that it can be trained on arbitrarily large images. Their model can be scaled for up to $4096 \times 4096$ generation, but the final results are usually blurry and lack details.
|
| 61 |
+
|
| 62 |
+
# 3. Method
|
| 63 |
+
|
| 64 |
+
# 3.1. Unified Multi-Scale Training
|
| 65 |
+
|
| 66 |
+
We train ZoomLDM to generate fixed-size $256 \times 256$ patches extracted at different scales of large images. To guide generation, we introduce a novel conditioning mechanism allowing the model to learn multi-scale dependencies. Figure 2 provides an overview of our multi-scale training.
|
| 67 |
+
|
| 68 |
+
We begin by extracting $256 \times 256$ image patches from a large image at full resolution. Since there are no descriptive patch-level annotations in large-image domains, we resort to pre-trained SSL encoders to provide detailed descriptors in place of human labels, as in [17]. The SSL encoders in these domains are usually trained on patches from these large images - for histopathology, we utilize UNI [7], an image encoder trained on $224 \times 224$ px $20 \times$ patches. After extracting patches $I^1$ at the initial scale (=1) and SSL embeddings $e$ , we end up with a dataset of $\{I_i^1, e_i\}$ pairs.
|
| 69 |
+
|
| 70 |
+
We downsample the large image by a factor of 2 and repeat the patch extraction process, getting a new set of patches at the next zoom level. But, as previously mentioned, we cannot directly use the SSL encoder on images from different scales - e.g., UNI is only trained on $20 \times$ images. Therefore, for scales above the first, we utilize the embeddings corresponding to the region contained within the context of the current-scale patch as conditioning. This means that we pair $I^2$ patches with the embeddings of all the $I^1$ images that they contain, giving us a dataset of $\{I_i^2, \left( \begin{array}{ll} e_1 & e_2 \\ e_3 & e_4 \end{array} \right)_i\}$ pairs.
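To make the pairing concrete, the sketch below builds (patch, embedding-grid) pairs for one zoom level. The tensor layout, the tiling loop, and the `ssl_encoder` call are illustrative assumptions; the paper's pipeline operates on whole-slide images with its own patch extraction code.

```python
import torch
import torch.nn.functional as F

def multiscale_pairs(image, ssl_encoder, scale, patch=256):
    """Pair fixed-size patches at a zoom level with the grid of initial-scale SSL embeddings.

    image: (3, H, W) tensor of the large image at the initial (20x) scale.
    ssl_encoder: maps a (B, 3, 256, 256) batch of 20x patches to (B, D) embeddings.
    scale: zoom level; the image is downsampled by 2**(scale - 1).
    """
    pairs = []
    factor = 2 ** (scale - 1)
    small = F.interpolate(image[None], scale_factor=1.0 / factor, mode="bicubic")[0]
    _, H, W = small.shape
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            tile = small[:, y:y + patch, x:x + patch]              # patch at this scale
            # The same field of view at 20x spans a (factor*256)^2 region -> factor x factor patches.
            y0, x0 = y * factor, x * factor
            region = image[:, y0:y0 + patch * factor, x0:x0 + patch * factor]
            grid = region.unfold(1, patch, patch).unfold(2, patch, patch)  # (3, f, f, 256, 256)
            grid = grid.permute(1, 2, 0, 3, 4).reshape(-1, 3, patch, patch)
            emb = ssl_encoder(grid).reshape(factor, factor, -1)    # (f, f, D) embedding matrix
            pairs.append((tile, emb))
    return pairs
```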
|
| 73 |
+
|
| 74 |
+
By repeating this process, we construct a dataset of (image, embeddings) pairs for all scales, which we want to utilize as our training data for a latent diffusion model. The issue is that the number of SSL embeddings for an image size increases exponentially as we increase scale. This leads to significant computational overhead, primarily due to the quadratic complexity of cross-attention mechanisms used to condition diffusion models. Additionally, conditioning the generation of $256 \times 256$ images with a massive number of embeddings is redundant, given that if we have a total of 8 scales then we will be using a $128 \times 128 \times D$ condition to generate a single $256 \times 256 \times 3$ patch.
|
| 75 |
+
|
| 76 |
+
To address this issue, we introduce the idea of a learned cross-magnification latent space, shared across embeddings of all scales. To implement this, we train a "Summarizer" transformer, jointly with the diffusion denoiser, that processes the SSL embeddings extracted alongside every image. The information contained in the embeddings is "summarized" in conjunction with an embedding of the image scale, extracting the essential information needed by the LDM to synthesize patches accurately.
|
| 77 |
+
|
| 78 |
+
The variable number of tokens (embeddings) in the summarizer input is transformed into a fixed-sized set of conditioning tokens. We utilize padding and pooling to provide a fixed-size output with which we train the LDM. The magnification embedding added to the input makes the summarizer scale-aware, allowing it to adapt to the appropriate level of detail required at different scales. The output of the Summarizer then serves as conditioning input for the LDM, enabling the model to generate high-quality patches with scale-adaptive conditioning.
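A minimal sketch of such a scale-aware summarizer is shown below. The hidden size, pooling choice, and layer count are assumptions for exposition (the paper's module is a 12-layer ViT-Base-style transformer); only the idea of projecting a variable-length embedding grid plus a magnification embedding into a fixed set of conditioning tokens is taken from the text.

```python
import torch
import torch.nn as nn

class Summarizer(nn.Module):
    """Project a variable-length grid of SSL embeddings and a magnification id
    into a fixed number of conditioning tokens (illustrative sizes)."""

    def __init__(self, ssl_dim=1024, dim=768, n_out=16, n_scales=8, depth=4, max_tokens=64):
        super().__init__()
        self.proj = nn.Linear(ssl_dim, dim)
        self.scale_emb = nn.Embedding(n_scales, dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, max_tokens + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.out = nn.Linear(dim, n_out * dim)
        self.n_out, self.dim = n_out, dim

    def forward(self, ssl_tokens, scale_id):
        # ssl_tokens: (B, N, ssl_dim), N <= max_tokens (grids are pooled to at most 8x8).
        # scale_id: (B,) magnification index that makes the module scale-aware.
        x = self.proj(ssl_tokens)
        s = self.scale_emb(scale_id)[:, None, :]                 # prepend a learned scale token
        x = torch.cat([s, x], dim=1)
        x = x + self.pos_emb[:, : x.size(1)]
        x = self.encoder(x)
        pooled = x.mean(dim=1)                                    # pool to one summary vector
        return self.out(pooled).view(-1, self.n_out, self.dim)   # fixed-size conditioning tokens
```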
|
| 79 |
+
|
| 80 |
+
Conditioning Diffusion Model. Our image synthesis pipeline requires a set of SSL embeddings and the desired magnification level, which involves extracting the conditioning information from reference real large images. This becomes impractical when direct access to training data is unavailable. To address this, we train a second diffusion model, the Conditioning Diffusion Model (CDM), which learns to sample from the distribution of the learned cross-magnification latent space after training the LDM.
|
| 81 |
+
|
| 82 |
+
Rather than training a diffusion model to model the distribution of the SSL embeddings, which is as complex as learning the distribution of images, we learn the output of the Summarizer, as it captures the most relevant information for synthesizing an image at a given magnification. This approach allows the CDM to model a more refined, task-specific latent space. By also conditioning the CDM on scale, we enable magnification-aware novel image synthesis, which we show can generate high-quality, non-memorized images at the highest scale, even if the amount of data is incredibly scarce (2,500 images at $0.15625 \times$ magnification).
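Since the CDM simply models the Summarizer outputs conditioned on scale, its training step looks like a standard denoising objective over token sequences. The sketch below assumes an epsilon-prediction parameterization and a generic noise-prediction network standing in for the paper's Diffusion Transformer; schedule and names are illustrative, not the authors' recipe.

```python
import torch
import torch.nn.functional as F

def cdm_training_step(cdm, summarizer_tokens, scale_id, alphas_cumprod):
    """One illustrative CDM training step.

    summarizer_tokens: (B, N, D) conditioning tokens produced by the Summarizer.
    scale_id: (B,) integer magnification index used as conditioning.
    alphas_cumprod: (T,) cumulative noise schedule.
    """
    B = summarizer_tokens.shape[0]
    t = torch.randint(0, alphas_cumprod.numel(), (B,), device=summarizer_tokens.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noise = torch.randn_like(summarizer_tokens)
    noisy = a_bar.sqrt() * summarizer_tokens + (1 - a_bar).sqrt() * noise
    pred = cdm(noisy, t, scale_id)      # predict the added noise, conditioned on scale
    return F.mse_loss(pred, noise)
```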
|
| 85 |
+
|
| 86 |
+
# 3.2. Joint Multi-Scale Sampling
|
| 87 |
+
|
| 88 |
+
One of the biggest challenges in large-image domains is synthesizing images that contain local details and exhibit global consistency. Due to their immense sizes, we cannot directly train a model on the full gigapixel images, and training on individual scales will either lead to loss of detail or contextually incoherent results.
|
| 89 |
+
|
| 90 |
+
We propose a multi-scale training pipeline intrinsically motivated by the need to sample images from multiple scales jointly. By drawing samples jointly, we can balance the computational demands of generating large images by separating the global context generation, which is offset by synthesizing an image at a coarser scale, and the synthesis of fine local details, which is done at the lowest level.
|
| 91 |
+
|
| 92 |
+
We develop a joint multi-scale sampling approach that builds upon ZoomLDM's multi-scale nature and enables us to generate large images of up to $4096 \times 4096$ pixels. The key to our approach is providing 'self-guidance' to the model by guiding the generation of the lowest scales using the so-far-generated global context. To implement this guidance we build upon a recent diffusion inference algorithm [16], which enables fast conditional inference.
|
| 93 |
+
|
| 94 |
+
Inference Algorithm. An image at scale $s + 1$ corresponds to four images at the previous scale $s$ since, during training, we downsample the large images by a factor of 2 at every scale. We want to jointly generate the four patches at the smaller scale $x_{i}^{s}$ , $i = 1, \dots, 4$ , and the single image at the next level $x^{s + 1}$ . The relationship between these images is known; we can recover $x^{s + 1}$ by multiplying with a linear downsampling operator $A$ :
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
\boldsymbol {x} ^ {s + 1} = \boldsymbol {A} \left( \begin{array}{l l} \boldsymbol {x} _ {1} ^ {s} & \boldsymbol {x} _ {2} ^ {s} \\ \boldsymbol {x} _ {3} ^ {s} & \boldsymbol {x} _ {4} ^ {s} \end{array} \right). \tag {1}
|
| 98 |
+
$$
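Concretely, Eq. (1) says the parent patch is a linear downsampling of the 2x2 mosaic of its children. The sketch below uses bicubic interpolation as a stand-in for $A$ (the experiments also use bicubic downsampling); treating it as a batched function is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def A(children):
    """Linear downsampling operator relating four scale-s patches to the scale-(s+1) patch.

    children: (B, 4, 3, 256, 256) ordered [top-left, top-right, bottom-left, bottom-right].
    Returns the (B, 3, 256, 256) parent obtained by mosaicking and 2x bicubic downsampling.
    """
    tl, tr, bl, br = children.unbind(dim=1)
    top = torch.cat([tl, tr], dim=-1)            # (B, 3, 256, 512)
    bottom = torch.cat([bl, br], dim=-1)
    mosaic = torch.cat([top, bottom], dim=-2)    # (B, 3, 512, 512)
    return F.interpolate(mosaic, scale_factor=0.5, mode="bicubic", align_corners=False)
```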
|
| 99 |
+
|
| 100 |
+
We use the above matrix notation to denote the spatial arrangement of images. The algorithm proposed in [16] introduces a method to sample an image from a diffusion model given a linear constraint. Given that our multi-scale images are related by a linear constraint, we use a modified version of this algorithm to perform joint sampling across magnifications. We first provide a brief overview and then present the modifications necessary for joint multi-scale sampling.
|
| 101 |
+
|
| 102 |
+
Since we use an LDM, we perform the denoising in the VAE latent space and require the Dec and Enc networks to map from latents $\mathbf{z}$ to images $\mathbf{x}$ and back. The algorithm requires a linear operator $\mathbf{A}$ (and its transpose $\mathbf{A}^T$ ) and a pixel-space measurement $\pmb{y}$ that we want our final sample $\pmb{z}_0$ to match, minimizing $C = ||\pmb{A}Dec(\pmb{z}_0) - \pmb{y}||_2^2$ . In every step $t$ of the diffusion process, the current noisy latent $\pmb{z}_t$ is used to estimate the final 'clean' latent $\hat{\pmb{z}}_0(\pmb{z}_t)$ , by applying the denoiser model $\epsilon_{\theta}(\pmb{z}_t)$ and Tweedie's formula [12]. In the typical DDIM [43] sampling process, the next diffusion step is predicted as
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
\boldsymbol{z}_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, \hat{\boldsymbol{z}}_0(\boldsymbol{z}_t) + \sqrt{1 - \bar{\alpha}_{t-1}}\, \boldsymbol{\epsilon}_\theta(\boldsymbol{z}_t) + \hat{\beta}_t \boldsymbol{\epsilon}_t. \tag{2}
|
| 108 |
+
$$
|
| 109 |
+
|
| 110 |
+
The algorithm of [16] proposes minimizing the $C(\boldsymbol{z}_t) = ||ADec(\hat{\boldsymbol{z}}_0(\boldsymbol{z}_t)) - \boldsymbol{y}||_2^2$ w.r.t. $\boldsymbol{z}_t$ at every timestep $t$ before performing the DDIM step. To do that it first computes an error direction as
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
\boldsymbol{e} = \nabla_{\hat{\boldsymbol{z}}_0} \left\| \boldsymbol{A}\, \mathrm{Dec}\left(\hat{\boldsymbol{z}}_0(\boldsymbol{z}_t)\right) - \boldsymbol{y} \right\|_2^2. \tag{3}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
This error direction and the current noisy sample $\mathbf{z}_t$ are used to compute the gradient $\pmb{g} = \nabla_{\pmb{z}_t} C(\pmb{z}_t) = \nabla_{\pmb{z}_t} ||\pmb{A}\hat{\pmb{z}}_0(\pmb{z}_t) - \pmb{y}||_2^2$ using a finite difference approximation and the current noisy sample $\mathbf{z}_t$ is updated:
|
| 117 |
+
|
| 118 |
+
$$
\boldsymbol{g} \approx \left[ \hat{\boldsymbol{z}}_0(\boldsymbol{z}_t + \delta \boldsymbol{e}) - \hat{\boldsymbol{z}}_0(\boldsymbol{z}_t) \right] / \delta, \tag{4}
$$

$$
\boldsymbol{z}_t \leftarrow \boldsymbol{z}_t + \lambda \boldsymbol{g}. \tag{5}
$$
|
| 121 |
+
|
| 122 |
+
Efficient Joint Sampling We make two significant modifications to this algorithm to perform the joint multi-scale sampling. First, since we do not have access to a real measurement $\mathbf{y}$ , which corresponds to the higher scale image $\mathbf{x}^{s + 1}$ , we use the estimate of the image $Dec(\hat{\mathbf{z}}^{s + 1})$ to guide the generation of $z^s$ . Second, we propose a more efficient way of computing error direction (Eq. 3), which does not require memory and time-intensive backpropagations. To jointly sample images from scales $s + 4$ and $s$ we need to generate $16 \times 16 + 1$ total images, which would be infeasible with the previous error computation.
|
| 123 |
+
|
| 124 |
+
To avoid the backpropagation in Eq. 3, we propose computing a numerical approximation of $e$ . Similar to Eq. 4, we utilize finite differences and compute
|
| 125 |
+
|
| 126 |
+
$$
|
| 127 |
+
\boldsymbol{e} \approx \left[ \operatorname{Enc}\left(\operatorname{Dec}(\hat{\boldsymbol{z}}_0) + \zeta \boldsymbol{e}_{\mathrm{img}}\right) - \operatorname{Enc}\left(\operatorname{Dec}(\hat{\boldsymbol{z}}_0)\right) \right] / \zeta \tag{6}
|
| 128 |
+
$$
|
| 129 |
+
|
| 130 |
+
where $\pmb{e}_{img} = \pmb{A}^T (\pmb{A}\, \text{Dec}(\hat{\pmb{z}}_0(\pmb{z}_t)) - \pmb{y})$ . This eliminates the need to backpropagate through the decoder without significantly sacrificing the quality of the images generated. We provide a detailed background of the conditional inference algorithm and how our approximation reduces computation in the supplementary material.
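Pulling Eqs. (3)-(6) together, one guidance update before the DDIM step can be sketched as below: the coarse-scale decoded estimate plays the role of $y$, and both gradients are replaced by finite differences. All names, step sizes, and the exact call structure are assumptions for exposition, not the released implementation.

```python
import torch

def guided_step(z_t, z0_hat_fn, enc, dec, A, A_T, y, delta=1e-2, zeta=1e-2, lam=1.0):
    """One self-guidance update before a DDIM step (sketch of Eqs. 3-6).

    z_t:       current noisy latents of the fine-scale patches.
    z0_hat_fn: z_t -> clean-latent estimate via the denoiser and Tweedie's formula.
    enc/dec:   VAE encoder/decoder; A/A_T: downsampling operator and its transpose.
    y:         target coarse image, here the decoded estimate of the next scale.
    """
    with torch.no_grad():
        z0 = z0_hat_fn(z_t)
        x0 = dec(z0)
        e_img = A_T(A(x0) - y)                                 # pixel-space residual direction
        # Eq. 6: finite-difference surrogate for backpropagating through Dec.
        e = (enc(x0 + zeta * e_img) - enc(x0)) / zeta
        # Eq. 4: finite-difference surrogate for the gradient w.r.t. z_t.
        g = (z0_hat_fn(z_t + delta * e) - z0) / delta
        # Eq. 5: nudge z_t toward consistency with the coarser scale.
        return z_t + lam * g
```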
|
| 131 |
+
|
| 132 |
+
# 4. Experiments
|
| 133 |
+
|
| 134 |
+
In this section, we showcase the experiments conducted to validate the effectiveness of our method. We train the unified latent diffusion model, ZoomLDM, on patches from eight different magnifications in histopathology. We evaluate the quality of synthetic samples using both real and CDM-sampled conditions. Further, we exploit the multi-scale nature of ZoomLDM to demonstrate its strength in generating good quality high-resolution images across scales, and its utility in super-resolution (SR) and multiple instance learning (MIL) tasks.
|
| 137 |
+
|
| 138 |
+
# 4.1. Setup
|
| 139 |
+
|
| 140 |
+
# 4.1.1. Implementation details
|
| 141 |
+
|
| 142 |
+
We train the LDMs on 3 NVIDIA H100 GPUs, with a batch size 200 per GPU. We use the training code and checkpoints provided by [38]. Our LDM configuration consists of a VQ-f4 autoencoder and a U-Net model pre-trained on ImageNet. We set the learning rate at $10^{-4}$ with a warmup of 10,000 steps. The Summarizer is implemented as a 12-layer Transformer, modeled after ViT-Base. For the CDM, we train a Diffusion Transformer [34] on the outputs of the summarizer. We utilize DDIM sampling [43] with 50 steps for both models and apply classifier-free guidance [20] sampling with a scale of 2.0 to create synthetic images. See supplemental for more details on the Summarizer and CDM.
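The implementation details mention 50-step DDIM sampling with classifier-free guidance at scale 2.0. For reference, one guided denoising step in its generic form is sketched below; this is the standard formulation, not the repository code, and the null-condition handling is an assumption.

```python
import torch

def cfg_ddim_step(eps_model, z_t, t, t_prev, cond, null_cond, abar, w=2.0):
    """One deterministic (eta = 0) DDIM step with classifier-free guidance.

    eps_model(z, t, c) predicts noise; `cond` are Summarizer tokens, `null_cond`
    the unconditional embedding; abar holds cumulative alphas; w is the guidance scale.
    """
    eps_c = eps_model(z_t, t, cond)
    eps_u = eps_model(z_t, t, null_cond)
    eps = eps_u + w * (eps_c - eps_u)                      # classifier-free guidance
    a_t, a_prev = abar[t], abar[t_prev]
    z0 = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean latent
    return a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps  # DDIM update
```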
|
| 143 |
+
|
| 144 |
+
# 4.1.2. Dataset
|
| 145 |
+
|
| 146 |
+
We select 1,136 whole slide images (WSI) from TCGA-BRCA [4]. Using the code from DSMIL[27], we extract $256 \times 256$ patches at eight different magnifications: $20 \times$ , $10 \times$ , $5 \times$ , $2.5 \times$ , $1.25 \times$ , $0.625 \times$ , $0.3125 \times$ , and $0.15625 \times$ . Each patch is paired with its corresponding base resolution $(20 \times)$ region—for instance, a $256 \times 256$ pixel patch at $5 \times$ magnification is paired with a $1024 \times 1024$ pixel region at $20 \times$ . We then process the $20 \times$ regions through the UNI encoder [8] to produce an embedding matrix for each patch.
|
| 147 |
+
|
| 148 |
+
The dimensions of this embedding matrix vary based on the patch's magnification level. For example, a $5 \times$ patch corresponding to a $20 \times$ region of size $1024 \times 1024$ results in an embedding matrix of dimensions $4 \times 4 \times 1024$ . As discussed previously, to avoid redundancy in large embedding matrices, we average pool embeddings larger than $8 \times 8$ to $8 \times 8$ (magnifications $1.25 \times$ and lower).
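The pooling of oversized embedding grids to $8 \times 8$ is a one-liner; the channel-first layout below is an assumption about how the grid is stored.

```python
import torch
import torch.nn.functional as F

# Embedding grids larger than 8x8 (magnifications 1.25x and lower) are average-pooled
# to 8x8 before conditioning; a (D, f, f) layout is assumed for illustration.
emb = torch.randn(1024, 16, 16)                 # e.g. a patch covering 16x16 base tiles
pooled = F.adaptive_avg_pool2d(emb, (8, 8))     # (1024, 8, 8)
tokens = pooled.flatten(1).T                    # (64, 1024) tokens for the Summarizer
```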
|
| 149 |
+
|
| 150 |
+
In the supplementary, we also provide results for training ZoomLDM on satellite images. We use a similar training setting, replacing the WSIs from pathology with NAIP [44] tiles and the SSL encoder with DINO-v2 [33], showing the wider applicability of the proposed model.
|
| 151 |
+
|
| 152 |
+
# 4.2. Image quality
|
| 153 |
+
|
| 154 |
+
For every histopathology magnification, we generate 10,000 $256 \times 256$ px patches using ZoomLDM and evaluate their quality using the Fréchet Inception Distance (FID) [19]. For $20 \times$ , $10 \times$ and $5 \times$ magnifications, we compare against the state-of-the-art (SoTA) works of [17, 47]. For lower magnifications, we train standalone models specifically for patches from those magnifications, keeping the architecture consistent with ZoomLDM.
|
| 155 |
+
|
| 156 |
+
Table 1. FID of patches generated from ZoomLDM across different magnifications, compared with single magnification models. ZoomLDM achieved better FID scores than SoTA, with particularly significant improvements at lower scales.
|
| 157 |
+
|
| 158 |
+
<table><tr><td>Magnification</td><td>20×</td><td>10×</td><td>5×</td><td>2.5×</td><td>1.25×</td><td>0.625×</td><td>0.3125×</td><td>0.15625×</td></tr><tr><td># Training patches</td><td>12 Mil</td><td>3 Mil</td><td>750k</td><td>186k</td><td>57k</td><td>20k</td><td>7k</td><td>2.5k</td></tr><tr><td>ZoomLDM</td><td>6.77</td><td>7.60</td><td>7.98</td><td>10.73</td><td>8.74</td><td>7.99</td><td>8.34</td><td>13.42</td></tr><tr><td>SoTA</td><td>6.98 [17]</td><td>7.64 [47]</td><td>9.74 [17]</td><td>20.45</td><td>39.72</td><td>58.98</td><td>66.28</td><td>106.14</td></tr><tr><td>CDM</td><td>9.04</td><td>10.05</td><td>14.36</td><td>19.68</td><td>14.06</td><td>13.46</td><td>14.40</td><td>26.09</td></tr></table>
Figure 3. Large Images $(4096\times 4096)$ generated from ZoomLDM. Our large image generation framework is the first to generate 4k pathology images with local details and global consistency, all within reasonable inference time. We provide more 4k examples and comparisons in the supplementary.
As indicated in Table 1, ZoomLDM achieves superior performance across all magnifications compared to the SoTA models. We see larger improvements for magnifications below $2.5 \times$ , where the data scarcity severely impacts the model's ability to synthesize diverse, high-quality images. This highlights the advantage of our unified architecture and conditioning approach. By leveraging data and conditioning across all magnifications, we allow the low-density data regions to benefit from the insights that the model gains from the entire dataset, improving both model performance and efficiency.
|
| 170 |
+
|
| 171 |
+
Novel image synthesis: For the FID comparisons above, images were generated by randomly sampling SSL embeddings for different magnifications from the dataset. However, this approach is not always practical as it requires access to the dataset of embeddings at all times. To address this, we use the Conditioning Diffusion Model to draw samples from the shared cross-magnification latent space and generate new images conditioned on these latents (CDM row in Table 1). Despite the slight increase in FID, an expected outcome since the CDM cannot perfectly capture the true learned conditioning latent space, we still observe that the generated samples outperform the baselines in the data-scarce settings. We believe that this further emphasizes the importance of our shared cross-magnification latent space, by showing that we can model its distribution and capture all scales effectively. In the supplementary, we show synthetic images at $0.15625 \times$ alongside their closest neighbors in the dataset to demonstrate the absence of memorization.
|
| 174 |
+
|
| 175 |
+
Table 2. CLIP and Crop FID values (lower is better) for our large image generation experiments. ZoomLDM outperforms previous works on $1024 \times 1024$ generation. While our $4096 \times 4096$ FIDs are worse, we provide qualitative examples in the supplementary that highlight the fundamental differences that emerge when scaling up the three methods. Inference time for a single image shows that our method is the only practical approach for 4k image generation.
|
| 176 |
+
|
| 177 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">1024 × 1024</td><td colspan="3">4096 × 4096</td></tr><tr><td>Time / img</td><td>CLIP FID</td><td>Crop FID</td><td>Time / img</td><td>CLIP FID</td><td>Crop FID</td></tr><tr><td>Graikos et al. [17]</td><td>60 s</td><td>7.43</td><td>15.51</td><td>4 h</td><td>2.75</td><td>11.30</td></tr><tr><td>∞-Brush [26]</td><td>30 s</td><td>3.74</td><td>17.87</td><td>12 h</td><td>2.63</td><td>14.76</td></tr><tr><td>ZoomLDM</td><td>28 s</td><td>1.23</td><td>14.94</td><td>8 m</td><td>6.75</td><td>18.90</td></tr></table>
|
| 178 |
+
|
| 179 |
+
# 4.3. Large image generation
|
| 180 |
+
|
| 181 |
+
In Section 3.2, we presented an algorithm for jointly sampling images at multiple scales. We perform experiments on generating $20 \times$ histopathology images jointly with other magnifications in two settings: sampling $20 \times$ with $5 \times$ , generating $1024 \times 1024$ images, and sampling $20 \times$ with $1.25 \times$ , giving $4096 \times 4096$ samples. We employ bicubic interpolation as the downsampling operator $A$ , where for $5 \times$ and $1.25 \times$ , we downsample by $4 \times$ and $16 \times$ , respectively.
|
| 182 |
+
|
| 183 |
+

|
| 184 |
+
Figure 4. We showcase $4 \times$ super-resolution results ( $256 \times 256 \rightarrow 1024 \times 1024$ ). Samples generated by other methods [38, 48] exhibit artifacts, inconsistencies, and blurriness that are not present in our outputs. Specifically, in blue boxes, we can observe that CompVis[38] generates fine scale artifacts, while ControlNet[48] produces generally blurry outputs. ZoomLDM produces a sharp output, generating details generally consistent with the ground truth image.
|
| 185 |
+
|
| 186 |
+
Table 3. Super-resolution results on TCGA-BRCA [4] and BACH [1] using ZoomLDM and other diffusion-based baselines. Using ZoomLDM with the proposed condition inference achieves the best performance.
|
| 187 |
+
|
| 188 |
+
<table><tr><td rowspan="2">Method</td><td rowspan="2">Conditioning</td><td colspan="5">TCGA BRCA</td><td colspan="5">BACH</td></tr><tr><td>SSIM ↑</td><td>PSNR ↑</td><td>LPIPS↓</td><td>CONCH ↑</td><td>UNI ↑</td><td>SSIM ↑</td><td>PSNR ↑</td><td>LPIPS↓</td><td>CONCH ↑</td><td>UNI ↑</td></tr><tr><td>Bicubic</td><td>-</td><td>0.653</td><td>24.370</td><td>0.486</td><td>0.871</td><td>0.524</td><td>0.895</td><td>34.690</td><td>0.180</td><td>0.969</td><td>0.810</td></tr><tr><td>CompVis [38]</td><td>LR image</td><td>0.563</td><td>21.926</td><td>0.247</td><td>0.946</td><td>0.565</td><td>0.723</td><td>27.278</td><td>0.206</td><td>0.954</td><td>0.576</td></tr><tr><td>ControlNet [48]</td><td>LR image</td><td>0.543</td><td>21.980</td><td>0.252</td><td>0.874</td><td>0.563</td><td>0.780</td><td>27.339</td><td>0.276</td><td>0.926</td><td>0.721</td></tr><tr><td rowspan="3">ZoomLDM</td><td>Uncond</td><td>0.591</td><td>23.217</td><td>0.260</td><td>0.936</td><td>0.680</td><td>0.739</td><td>29.822</td><td>0.235</td><td>0.965</td><td>0.741</td></tr><tr><td>GT emb</td><td>0.599</td><td>23.273</td><td>0.250</td><td>0.946</td><td>0.672</td><td>0.732</td><td>29.236</td><td>0.245</td><td>0.974</td><td>0.753</td></tr><tr><td>Infer emb</td><td>0.609</td><td>23.407</td><td>0.229</td><td>0.957</td><td>0.719</td><td>0.779</td><td>30.443</td><td>0.173</td><td>0.974</td><td>0.808</td></tr></table>
|
| 189 |
+
|
| 190 |
+
$1.25 \times$ , giving $4096 \times 4096$ samples. We employ bicubic interpolation as the downsampling operator $A$ , where for $5 \times$ and $1.25 \times$ , we downsample by $4 \times$ and $16 \times$ , respectively.
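As a concrete illustration of the downsampling operator $A$, the sketch below applies bicubic interpolation with the two factors used above. This is a minimal sketch, not the released implementation; the antialiasing setting is an assumption, since the paper does not specify it.

```python
import torch
import torch.nn.functional as F

def downsample_A(x: torch.Tensor, factor: int) -> torch.Tensor:
    """Bicubic downsampling operator A tying scales together.

    x: image batch of shape (B, C, H, W); factor: 4 for 20x -> 5x, 16 for 20x -> 1.25x.
    """
    return F.interpolate(x, scale_factor=1.0 / factor, mode="bicubic",
                         align_corners=False, antialias=True)

# Example: a (1, 3, 4096, 4096) canvas downsampled to the 1.25x view (256 x 256).
y = downsample_A(torch.rand(1, 3, 4096, 4096), factor=16)
```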
|
| 191 |
+
|
| 192 |
+
In Table 2, we showcase CLIP FID and Crop FID values, adopted from [26], and compare our large-image generation method against existing state-of-the-art approaches. CLIP FID downsamples the full image and extracts features from a CLIP [37] model, whereas Crop FID extracts $256 \times 256$ crops from the large images and computes FID using the conventional Inception features [42].
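For clarity, a minimal sketch of how the Crop FID inputs could be prepared is given below: each large image is tiled into non-overlapping $256 \times 256$ crops, and FID is then computed between the real and generated crop folders with the standard Inception features, e.g. via pytorch-fid [42]. Directory names are placeholders, not the paper's actual pipeline.

```python
from pathlib import Path
from PIL import Image

def extract_crops(src_dir: str, dst_dir: str, crop: int = 256) -> None:
    """Tile each large image into non-overlapping crop x crop patches for Crop FID."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        w, h = img.size
        for i, top in enumerate(range(0, h - crop + 1, crop)):
            for j, left in enumerate(range(0, w - crop + 1, crop)):
                img.crop((left, top, left + crop, top + crop)).save(
                    Path(dst_dir) / f"{path.stem}_{i}_{j}.png")

# Crop FID is then computed between real and generated crop folders, e.g. with
# pytorch-fid [42]:  python -m pytorch_fid real_crops/ fake_crops/
```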
|
| 193 |
+
|
| 194 |
+
On $1024 \times 1024$ generation, we easily outperform existing approaches with similar or smaller sampling times. On $4096 \times 4096$ generation, our method lags in the two quality metrics but offers a reasonable inference time per image (8 min vs. $>4$ h). However, regarding the $4096 \times 4096$ results, we find fundamental differences between our synthesized images (Figure 3) and those of [17, 26] (see supplementary). In particular, the local patch-based model of Graikos et al. [17] completely fails to capture the global context in the generated images. While it generates high-quality patches and stitches them
|
| 195 |
+
|
| 196 |
+
together over the $4096 \times 4096$ canvas, the overall image does not resemble a realistic pathology image. On the other hand, $\infty$ -Brush [26] captures the global image structures but produces blurry results. In contrast, ZoomLDM balances local details and global structure, producing images that not only exhibit high fidelity but also maintain overall realism across the entire $4096 \times 4096$ canvas. We are the first to generate $4k$ pathology images with both detail and global coherency under a tractable computational budget.
|
| 197 |
+
|
| 198 |
+
# 4.4. Super-resolution
|
| 199 |
+
|
| 200 |
+
Our joint multi-scale sampling allows us to sample multiple images from different magnifications simultaneously. A natural question is whether we could also use ZoomLDM for super-resolution, where the image at the coarser magnification is given and the fine details need to be inferred. We provide a solution for super-resolution with ZoomLDM using a straightforward extension of our joint sampling algorithm.
|
| 201 |
+
|
| 202 |
+
The main challenge we need to overcome is the absence of conditioning. Given only an image at a magnification other than $20 \times$ , we cannot obtain SSL embeddings, which
|
| 203 |
+
|
| 204 |
+
are extracted from a $20 \times$ -specific encoder. Nevertheless, we discover an interesting inversion property of our model, which allows us to infer the conditioning given an image and its magnification. Similar to textual inversion [15], and more recently prompt tuning [10], we can optimize the SSL input to the summarizer to obtain a set of embeddings that generate images that resemble the one provided. We discuss the inversion approach in the supplementary material in more detail, along with inversion examples.
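A minimal sketch of this embedding-inversion idea is given below: the SSL-like conditioning tokens are treated as free parameters and optimized with the standard denoising objective on the given image, while all model components stay frozen. The module handles (`vae`, `unet`, `summarizer`, `alphas_cumprod`) and all hyperparameters are placeholders, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def invert_embeddings(image, magnification, vae, unet, summarizer, alphas_cumprod,
                      n_tokens=16, dim=1024, steps=500, lr=1e-2, device="cuda"):
    """Optimize a set of SSL-like embeddings so the frozen model reconstructs `image`."""
    with torch.no_grad():
        z0 = vae.encode(image.to(device))                   # latent of the given image
    emb = torch.randn(1, n_tokens, dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([emb], lr=lr)
    for _ in range(steps):
        t = torch.randint(0, len(alphas_cumprod), (1,), device=device)
        noise = torch.randn_like(z0)
        a = alphas_cumprod[t].view(-1, 1, 1, 1)
        zt = a.sqrt() * z0 + (1 - a).sqrt() * noise          # forward diffusion at step t
        cond = summarizer(emb, magnification)                # cross-magnification conditioning
        loss = F.mse_loss(unet(zt, t, cond), noise)          # standard eps-prediction loss
        opt.zero_grad(); loss.backward(); opt.step()
    return emb.detach()
```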
|
| 205 |
+
|
| 206 |
+
Once we have obtained a set of plausible conditioning embeddings, we can run our joint multi-scale sampling algorithm, fixing the measurement $y$ to the real image we want to super-resolve. To test ZoomLDM's capabilities, we construct a simple testbed of $4 \times$ super-resolution on in-distribution and out-of-distribution images from TCGA-BRCA and BACH [1] respectively. As baselines, we use bicubic interpolation, a naive super-resolution-specific LDM trained on OpenImages [25] (CompVis), and a ControlNet [48] trained on top of ZoomLDM.
|
| 207 |
+
|
| 208 |
+
In Table 3 and Figure 4, we present the results of our experiments. We find that SSIM and PSNR are slightly misleading, as they favor the blurry bicubic images, but they also point out some significant inconsistencies in the LDM and ControlNet outputs. For better comparisons, we also compute LPIPS [49], CONCH [28] similarity (which downsamples the image to $224 \times 224$), and UNI similarity, which we compute on a per $256 \times 256$ patch level. On most perceptual metrics, ZoomLDM inference is the best-performing method while remaining faithful to the input image. Interestingly, we discover that using the embedding inversion, which infers the conditioning from the given low-resolution image, performs better than providing the real embeddings.
|
| 209 |
+
|
| 210 |
+
Table 4. AUC for BRCA subtyping and HRD prediction. Features extracted from ZoomLDM outperform SoTA vision encoders.
|
| 211 |
+
|
| 212 |
+
<table><tr><td>Features</td><td>Mag</td><td>Subtyping</td><td>HRD</td></tr><tr><td>Phikon [14]</td><td>20×</td><td>93.81</td><td>76.88</td></tr><tr><td>UNI [8]</td><td>20×</td><td>94.09</td><td>81.79</td></tr><tr><td>CTransPath [45]</td><td>5×</td><td>93.11</td><td>85.37</td></tr><tr><td rowspan="3">ZoomLDM</td><td>20×</td><td>94.49</td><td>85.25</td></tr><tr><td>5×</td><td>94.09</td><td>86.26</td></tr><tr><td>Multi-scale (20× + 5×)</td><td>94.91</td><td>88.03</td></tr></table>
|
| 213 |
+
|
| 214 |
+
# 4.5. Multiple Instance Learning
|
| 215 |
+
|
| 216 |
+
Multiple instance learning (MIL) tasks benefit from multi-scale information, as different magnifications reveal distinct and complementary features. Prior work [7, 27] that demonstrated this behavior required training separate encoders for each scale. We hypothesize that ZoomLDM offers an efficient solution by enabling seamless multi-scale feature extraction.
|
| 217 |
+
|
| 218 |
+
To validate this hypothesis, we utilize ZoomLDM as a feature extractor and apply a MIL approach to the slide-level classification tasks of breast cancer subtyping and Homologous Recombination Deficiency (HRD) prediction, both of which are binary classification tasks. For each patch in the WSI, we extract features from ZoomLDM's U-Net output block 3 at a fixed timestep $t = 100$, conditioned on UNI embeddings. We employ a 10-fold cross-validation strategy for subtyping, consistent with the data splits from HIPT [7], and a 5-fold cross-validation for HRD prediction, reporting performance on a held-out test split as per SI-MIL [24]. We compare ZoomLDM's features to those from SoTA encoders (Phikon [14], CTransPath [45], and UNI [8]) using the ABMIL method [22, 23].
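The slide-level head follows the spirit of ABMIL [22]; the sketch below shows a minimal attention-based pooling module over per-patch features (e.g., pooled U-Net block-3 activations at $t = 100$, optionally concatenated across $20 \times$ and $5 \times$ for the multi-scale variant). Dimensions and the exact feature pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ABMILHead(nn.Module):
    """Attention-based MIL pooling in the spirit of ABMIL [22]; dims are illustrative."""
    def __init__(self, in_dim: int, hid_dim: int = 256, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh(),
                                  nn.Linear(hid_dim, 1))
        self.cls = nn.Linear(in_dim, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N_patches, in_dim) per-patch features extracted from the WSI.
        a = torch.softmax(self.attn(feats), dim=0)   # (N, 1) attention over instances
        slide = (a * feats).sum(dim=0)               # attention-weighted slide embedding
        return self.cls(slide)                       # slide-level logits
```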
|
| 219 |
+
|
| 220 |
+
As expected, the results in Table 4 show that ZoomLDM's multi-scale features (fusing $20 \times$ and $5 \times$) outperform SoTA encoders in both tasks. This improvement highlights the effectiveness of ZoomLDM's cross-magnification latent space in capturing multi-scale dependencies. Surprisingly, even in a single-magnification setting, ZoomLDM outperforms all SoTA encoders. This result suggests that by learning to generate across scales, ZoomLDM learns to produce features that are aware of cross-magnification long-range dependencies and therefore exceed the capabilities of those produced by SSL encoders for downstream MIL tasks.
|
| 221 |
+
|
| 222 |
+
# 5. Conclusion
|
| 223 |
+
|
| 224 |
+
We presented ZoomLDM, the first conditional diffusion model capable of generating images across multiple scales with state-of-the-art synthetic image quality. By introducing a cross-magnification latent space, implemented with a trainable summarizer module, ZoomLDM effectively captures dependencies across magnifications. Our novel joint multi-scale sampling approach allows for efficient generation of large, high-quality, and structurally coherent histopathology images of up to $4096 \times 4096$ pixels while preserving both global structure and fine details.
|
| 225 |
+
|
| 226 |
+
In addition to synthesis, ZoomLDM demonstrates its utility as a powerful feature extractor in multiple instance learning experiments. The multi-scale representations learned by our model outperform SoTA SSL encoders in slide-level classification tasks, enabling more accurate subtyping, prognosis prediction, and biomarker identification. Furthermore, our Conditioning Diffusion Model demonstrates the potential to integrate diverse input sources such as text or RNA sequences, paving the way for realistic synthetic datasets for training and evaluating pathologists, as well as controlled datasets for quality assurance. ZoomLDM is a step toward foundation generative models in histopathology, with the potential to shed light on tumor heterogeneity, refine cancer grading, and enrich our understanding of cancer's various manifestations.
|
| 227 |
+
|
| 228 |
+
Acknowledgements This research was partially supported by NSF grants IIS-2123920, IIS-2212046, NIH grants 1R01CA297843-01, 3R21CA258493-02S1 and NCI awards 1R21CA25849301A1, UH3CA225021.
|
| 229 |
+
|
| 230 |
+
# References
|
| 231 |
+
|
| 232 |
+
[1] Guilherme Aresta, Teresa Araujo, Scotty Kwok, Sai Saketh Chennamsetty, Mohammed Safwan, Varghese Alex, Bahram Marami, Marcel Prastawa, Monica Chan, Michael Donovan, Gerardo Fernandez, Jack Zeineh, Matthias Kohl, Christoph Walz, Florian Ludwig, Stefan Braunewell, Maximilian Baust, Quoc Dang Vu, Minh Nguyen Nhat To, Eal Kim, Jin Tae Kwak, Sameh Galal, Veronica Sanchez-Freire, Nadia Brancati, Maria Frucci, Daniel Riccio, Yaqi Wang, Lingling Sun, Kaiqiang Ma, Jiannan Fang, Ismael Kone, Lahsen Boulmane, Aurélio Campilho, Catarina Eloy, António Polónia, and Paulo Aguiar. Bach: Grand challenge on breast cancer histology images. Medical Image Analysis, 56:122-139, 2019. 7, 8
|
| 233 |
+
[2] Marco Aversa, Gabriel Nobis, Miriam Hagele, Kai Standvoss, Mihaela Chirica, Roderick Murray-Smith, Ahmed Alaa, Lukas Ruff, Daniela Ivanova, Wojciech Samek, et al. Diffinfinite: Large mask-image synthesis via parallel random patch diffusion in histopathology. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. 3
|
| 234 |
+
[3] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf, 2(3):8, 2023. 1
|
| 235 |
+
[4] Cancer Genome Atlas Research Network et al. The cancer genome atlas pan-cancer analysis project. Nat. Genet., 45(10):1113-1120, 2013. 5, 7
|
| 236 |
+
[5] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pretraining to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558-3568, 2021. 1
|
| 237 |
+
[6] Junsong Chen, Chongjian Ge, Enze Xie, Yue Wu, Lewei Yao, Xiaozhe Ren, Zhongdao Wang, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart- $\sigma$ : Weak-to-strong training of diffusion transformer for 4k text-to-image generation, 2024. 3
|
| 238 |
+
[7] Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16144-16155, 2022. 2, 3, 8
|
| 239 |
+
[8] Richard J Chen, Tong Ding, Ming Y Lu, Drew FK Williamson, Guillaume Jaume, Bowen Chen, Andrew Zhang, Daniel Shao, Andrew H Song, Muhammad Shaban, et al. A general-purpose self-supervised model for computational pathology. arXiv preprint arXiv:2308.15474, 2023. 5, 8
|
| 240 |
+
[9] Richard J Chen, Tong Ding, Ming Y Lu, Drew FK Williamson, Guillaume Jaume, Bowen Chen, Andrew
|
| 241 |
+
|
| 242 |
+
Zhang, Daniel Shao, Andrew H Song, Muhammad Shaban, et al. Towards a general-purpose foundation model for computational pathology. Nature Medicine, 2024. 2
|
| 243 |
+
[10] Hyungjin Chung, Jong Chul Ye, Peyman Milanfar, and Mauricio Delbracio. Prompt-tuning latent diffusion models for inverse problems. In *Forty-first International Conference on Machine Learning*, 2024. 8
|
| 244 |
+
[11] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 2
|
| 245 |
+
[12] Bradley Efron. Tweedie's formula and selection bias. Journal of the American Statistical Association, 106(496):1602-1614, 2011. 5
|
| 246 |
+
[13] Miguel Espinosa and Elliot J Crowley. Generate your own scotland: Satellite image generation conditioned on maps. arXiv preprint arXiv:2308.16648, 2023. 3
|
| 247 |
+
[14] Alexandre Filiot, Ridouane Ghermi, Antoine Olivier, Paul Jacob, Lucas Fidon, Alice Mac Kain, Charlie Saillard, and Jean-Baptiste Schiratti. Scaling self-supervised learning for histopathology with masked image modeling. medRxiv, pages 2023-07, 2023. 8
|
| 248 |
+
[15] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit Haim Bermano, Gal Chechik, and Daniel Cohen-or. An image is worth one word: Personalizing text-to-image generation using textual inversion. In The Eleventh International Conference on Learning Representations, 2023. 8
|
| 249 |
+
[16] Alexandros Graikos, Nebojsa Jojic, and Dimitris Samaras. Fast constrained sampling in pre-trained diffusion models. arXiv preprint arXiv:2410.18804, 2024. 4, 5
|
| 250 |
+
[17] Alexandros Graikos, Srikar Yellapragada, Minh-Quan Le, Saarthak Kapse, Prateek Prasanna, Joel Saltz, and Dimitris Samaras. Learned representation-guided diffusion models for large-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8532–8542, 2024. 1, 2, 3, 5, 6, 7
|
| 251 |
+
[18] Robert Harb, Thomas Pock, and Heimo Müller. Diffusion-based generation of histopathological whole slide images at a gigapixel scale. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 5131-5140, 2024. 2, 3
|
| 252 |
+
[19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 5
|
| 253 |
+
[20] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 1, 2, 5
|
| 254 |
+
[21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 2
|
| 255 |
+
[22] Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In International conference on machine learning, pages 2127-2136. PMLR, 2018. 8
|
| 256 |
+
[23] Jakub R Kaczmarzyk, Joel H Saltz, and Peter K Koo. Explainable ai for computational pathology identifies model limitations and tissue biomarkers. ArXiv, pages arXiv-2409, 2024. 8
|
| 257 |
+
|
| 258 |
+
[24] Saarthak Kapse, Pushpak Pati, Srijan Das, Jingwei Zhang, Chao Chen, Maria Vakalopoulou, Joel Saltz, Dimitris Samaras, Rajarsi R Gupta, and Prateek Prasanna. Si-mil: Taming deep mil for self-interpretability in gigapixel histopathology. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11226–11237, 2024. 8
|
| 259 |
+
[25] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. IJCV, 2020. 8
|
| 260 |
+
[26] Minh-Quan Le, Alexandros Graikos, Srikar Yellapragada, Rajarsi Gupta, Joel Saltz, and Dimitris Samaras. $\infty$ -brush: Controllable large image synthesis with diffusion models in infinite dimensions, 2024. 1, 2, 3, 6, 7
|
| 261 |
+
[27] Bin Li, Yin Li, and Kevin W Eliceiri. Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318-14328, 2021. 2, 5, 8
|
| 262 |
+
[28] Ming Y Lu, Bowen Chen, Drew FK Williamson, Richard J Chen, Ivy Liang, Tong Ding, Guillaume Jaume, Igor Odintsov, Long Phi Le, Georg Gerber, et al. A visual-language foundation model for computational pathology. Nature Medicine, 30:863-874, 2024. 8
|
| 263 |
+
[29] Puria Azadi Moghadam, Sanne Van Dalen, Karina C Martin, Jochen Lennerz, Stephen Yip, Hossein Farahani, and Ali Bashashati. A morphology focused diffusion probabilistic model for synthesis of histopathology images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2000-2009, 2023. 3
|
| 264 |
+
[30] Gustav Müller-Franzes, Jan Moritz Niehues, Firas Khader, Soroosh Tayebi Arasteh, Christoph Haarburger, Christiane Kuhl, Tianci Wang, Tianyu Han, Teresa Nolte, Sven Nebelung, et al. A multimodal comparison of latent denoising diffusion probabilistic models and generative adversarial networks for medical image synthesis. Scientific Reports, 13 (1):12098, 2023. 3
|
| 265 |
+
[31] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR, 2021. 2
|
| 266 |
+
[32] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, pages 16784-16804. PMLR, 2022. 1
|
| 267 |
+
[33] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 5
|
| 268 |
+
[34] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF Inter-
|
| 269 |
+
|
| 270 |
+
national Conference on Computer Vision, pages 4195-4205, 2023. 5
|
| 271 |
+
[35] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 3
|
| 272 |
+
[36] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations, 2024. 1
|
| 273 |
+
[37] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 7
|
| 274 |
+
[38] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 5, 7
|
| 275 |
+
[39] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4713-4726, 2022. 3
|
| 276 |
+
[40] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022. 1
|
| 277 |
+
[41] Ahmad Sebaq and Mohamed ElHelw. Rsdiff: Remote sensing image generation from text using diffusion model. arXiv preprint arXiv:2309.02455, 2023. 3
|
| 278 |
+
[42] Maximilian Seitzer. pytorch-fid: FID Score for PyTorch. https://github.com/mseitzer/pytorch-fid, 2020. Version 0.3.0. 7
|
| 279 |
+
[43] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2020. 3, 5
|
| 280 |
+
[44] USGS. National agriculture imagery program (NAIP), 2023. https://www.usgs.gov/centers/eros/science/usgs-eros-archive-aerial-photography-national-agriculture-imagery-program-naip. 5
|
| 281 |
+
[45] Xiyue Wang, Sen Yang, Jun Zhang, Minghui Wang, Jing Zhang, Junzhou Huang, Wei Yang, and Xiao Han. Transpath: Transformer-based self-supervised learning for histopathological image classification. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference, Strasbourg, France, September 27-October 1, 2021, Proceedings, Part VIII 24, pages 186-195. Springer, 2021. 8
|
| 282 |
+
[46] Xuan Xu, Saarthak Kapse, Rajarsi Gupta, and Prateek Prasanna. Vit-dae: Transformer-driven diffusion autoen
|
| 283 |
+
|
| 284 |
+
coder for histopathology image analysis. arXiv preprint arXiv:2304.01053, 2023. 3
|
| 285 |
+
[47] Srikar Yellapragada, Alexandros Graikos, Prateek Prasanna, Tahsin Kurc, Joel Saltz, and Dimitris Samaras. Pathldm: Text conditioned latent diffusion model for histopathology. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 5182-5191, 2024. 2, 3, 5, 6
|
| 286 |
+
[48] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023. 7, 8
|
| 287 |
+
[49] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. 8
|
CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1c2d8cb04445281b3e8de93d4ab82acea48e4f8fa5d3a17b473215099aadba49
|
| 3 |
+
size 644614
|
CVPR/2025/ZoomLDM_ Latent Diffusion Model for Multi-scale Image Generation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d3026f01f03b6369db91eaefa3c9079d285f336b5a9e7e133857e878b71fad92
|
| 3 |
+
size 402865
|
CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/d97dc35c-1298-4ebe-af81-4eec695a335e_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bef4d2d0a96678b10297876e87bae105bf0dfb13d51f371c3e9f47bfe8881a9c
|
| 3 |
+
size 92297
|
CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/d97dc35c-1298-4ebe-af81-4eec695a335e_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c497bc4cc6aa8f38b53893acf10a942067de0097d2c8d27847035cbcf6bf869c
|
| 3 |
+
size 111890
|
CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/d97dc35c-1298-4ebe-af81-4eec695a335e_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8e784fbbfae82a183d2aae5aaf61620fcd3bab542aaaba0dc5072deec898a7c6
|
| 3 |
+
size 2667772
|
CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/full.md
ADDED
|
@@ -0,0 +1,423 @@
| 1 |
+
# $\beta$ -FFT: Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation
|
| 2 |
+
|
| 3 |
+
Ming Hu $^{1,2}$
|
| 4 |
+
Jianfu Yin $^{1,2}$
|
| 5 |
+
Zhuangzhuang Ma $^{3*}$
|
| 6 |
+
Jianheng Ma $^{3*}$
|
| 7 |
+
Feiyu Zhu $^{1,2}$
|
| 8 |
+
Bingbing Wu $^{1,2}$
|
| 9 |
+
Ya Wen $^{4}$
|
| 10 |
+
Meng Wu $^{5}$
|
| 11 |
+
Cong Hu $^{5,6\dagger}$
|
| 12 |
+
Bingliang Hu $^{1\dagger}$
|
| 13 |
+
Quan Wang $^{1\dagger}$
$^{1}$ Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences
|
| 14 |
+
$^{2}$ University of Chinese Academy of Sciences
|
| 15 |
+
$^{3}$ Xidian University
|
| 16 |
+
$^{4}$ Xi'an University of Technology
|
| 17 |
+
$^{5}$ Zhongnan Hospital of Wuhan University
|
| 18 |
+
$^{6}$ The First Affiliated Hospital of Guangxi Medical University
|
| 19 |
+
|
| 20 |
+
# Abstract
|
| 21 |
+
|
| 22 |
+
Co-training has achieved significant success in the field of semi-supervised learning (SSL); however, the homogenization phenomenon, which arises from multiple models tending towards similar decision boundaries, remains inadequately addressed. To tackle this issue, we propose a novel algorithm called $\beta$-FFT, which approaches it from the perspectives of data processing and training structure. In data processing, we apply diverse augmentations to the input data and feed them into two sub-networks. To balance the training instability caused by different augmentations during consistency learning, we introduce a nonlinear interpolation technique based on the Fast Fourier Transform (FFT). By swapping low-frequency components between variously augmented images, this method not only generates smooth and diverse training samples that bridge different augmentations but also enhances the model's generalization capability while maintaining the stability of consistency learning. In training structure, we devise a differentiated training strategy to mitigate homogenization in co-training. Specifically, we use labeled data for additional training of one model within the co-training framework, while for unlabeled data, we employ linear interpolation based on the $\mathrm{Beta}(\beta)$ distribution as a regularization technique in this additional training. This approach allows for more efficient utilization of limited labeled data and simultaneously improves the model's performance on unlabeled data, optimizing overall system performance. Code is available at https://github.com/Xi-Mu-Yu/beta-FFT.
|
| 23 |
+
|
| 24 |
+
# 1. Introduction
|
| 25 |
+
|
| 26 |
+
As manually annotating medical images such as CT, MRI, and pathology images is both costly and labor-intensive,
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
Figure 1. Illustrating the architectures for (a) FixMatch, (b) MeanTeacher, (c) co-training, and (d) our approach $\beta$ -FFT. $X$ represents the dataset, $P$ is the confidence map, $Y$ denotes the labels encoded in one-hot format. More details in the method section.
|
| 30 |
+
|
| 31 |
+
this process becomes increasingly challenging for radiologists and other medical professionals as data volumes continue to grow, leading to scalability issues. Consequently, semi-supervised semantic segmentation has become particularly important in medical image analysis.
|
| 32 |
+
|
| 33 |
+
Research on semi-supervised learning (SSL) began with the self-training method [11, 30], which enhances model learning by leveraging unlabeled data. Initially, researchers aimed to achieve this by self-generating labels, but this often led to unreliable results when handling unlabeled data [1]. To address this issue, researchers introduced consistency regularization methods [24, 31], usually applied by enforcing perturbations on unlabeled data in an online fashion, helping the model maintain stability when facing data variations.
|
| 34 |
+
|
| 35 |
+
As research progressed, data augmentation's role in bolstering model robustness has become evident.
|
| 36 |
+
|
| 37 |
+
The FixMatch method [32] optimizes semi-supervised learning by leveraging pseudo-labeling and consistency regularization on augmented unlabeled data (Fig. 1a). Another notable approach in medical image semi-supervised segmentation is the Mean Teacher architecture [33], which employs an exponential moving average (EMA) of the student model's weights for the teacher network (Fig. 1b). Inspired by this framework, various methods have been developed to enhance semi-supervised segmentation. For example, the UA-MT framework [45] utilizes uncertainty information to guide the student model in learning reliable targets. Verma et al. [34] introduced an Interpolation Consistency Training framework to ensure that the predictions at interpolated unlabeled points align with those at the actual data points. URPC [20] guarantees consistent predictions across scales. BCP [3] further enhances the Mean Teacher architecture by bi-directionally copy-pasting labeled and unlabeled data, allowing the unlabeled data to learn shared semantics from the labeled data and addressing the empirical mismatch in semi-supervised medical image segmentation.
|
| 38 |
+
|
| 39 |
+
While these approaches significantly enhance semi-supervised learning, they also pose challenges. The close coupling between teacher and student models[13] can hinder effective knowledge transfer, limiting the teacher's capacity to convey valuable insights. This coupling may lead to confirmation bias, causing models to excessively rely on existing biases and overlook potential new information. To overcome the limitations associated with the coupling in teacher-student models, researchers introduced the co-training method[13, 25]. Co-training leverages the complementary characteristics of multiple models to enhance knowledge sharing and transfer, enabling each model to gain additional knowledge from others, thus effectively improving the performance of SSL (Fig.1c).
|
| 40 |
+
|
| 41 |
+
Although co-training has achieved significant success in SSL, the risk of model homogenization remains a critical challenge[16]. Specifically, multiple models tend to converge to similar decision boundaries[1, 16], leading to homogenization. This phenomenon reduces the diversity of learned representations, thereby limiting the models' generalization capacity in semi-supervised settings. This raises a crucial question: Can we introduce additional information or corrective mechanisms to maintain model accuracy while reducing homogenization between models in co-training, thereby enhancing overall performance?
|
| 42 |
+
|
| 43 |
+
To mitigate the issue of model homogenization in co-training, we introduce improvements from two aspects: data processing and network training strategies.
|
| 44 |
+
|
| 45 |
+
Firstly, in terms of data processing, we incorporated diverse data augmentation techniques. By generating augmented samples with different intensities (including strong and weak augmentations) and training different subnetworks separately, we achieved model differentiation at the
|
| 46 |
+
|
| 47 |
+
data level. Meanwhile, to prevent training instability caused by varying augmentation strengths, we employed the Fast Fourier Transform (FFT) to exchange low-frequency information between strongly and weakly augmented images, thereby creating new samples that lie between the two. This approach not only provides the model with new data perspectives but also effectively alleviates the instability caused by strong and weak augmentations.
|
| 48 |
+
|
| 49 |
+
Secondly, in terms of training strategies, we applied an additional training step to one of the subnetworks, while the other followed the original training process. We utilized labeled data for extra training on the selected model and introduced unlabeled data, generated through linear interpolation based on Beta distribution sampling, as a regularization term in the training process. This strategy not only helps maintain diversity in decision boundaries between the two models and reduces their homogenization but also further enhances the effectiveness of collaborative learning. We performed additional supervised training on Student Model 1 using labeled data, which not only enhances the performance of Student Model 1 but also helps to increase its independence. This approach aids in mitigating the confirmation bias between Student Model 1 and the Teacher network. To summarize, we make the following contributions:
|
| 50 |
+
|
| 51 |
+
1. Nonlinear Interpolation Strategy: By using the Fast Fourier Transform to exchange low-frequency information between weakly and strongly augmented images, we effectively mitigated the training instability caused by using different augmented data to address model homogenization. At the same time, this approach enriched the diversity of data samples and improved the model's generalization ability.
|
| 52 |
+
2. Dehomogenization during training: Student model 1 undergoes additional training, while Student model 2 does not. This differentiated training approach generates a unique loss for each model, effectively reducing homogenization between the two student models and preserving the diversity of their decision boundaries.
|
| 53 |
+
|
| 54 |
+
# 2. Related Work
|
| 55 |
+
|
| 56 |
+
# 2.1. Semi-Supervised Medical Image Segmentation
|
| 57 |
+
|
| 58 |
+
Previous methods can be broadly categorized into self-training methods[6, 49] and consistency regularization methods[8, 13]. Self-training algorithms are considered the fundamental prototype of pseudo-labeling methods[28], where a model is pre-trained on a labeled dataset and iteratively retrained or fine-tuned using predictions from unlabeled data. [23] matched these pseudo-labels by synthesizing new images rather than optimizing them. Within the framework of consistency regularization, [12] employed strong augmentation and weak augmentation to handle unlabeled data. Some studies have explored consistent data
|
| 59 |
+
|
| 60 |
+
transformations, such as patch shuffling data transformation [15], cut-and-paste augmentation [44], and copy-paste [3]. ABD[9] effectively integrates multiple perturbations through an adaptive bidirectional displacement mechanism, enhancing the quality of consistency learning. AD-MT[48] reduces confirmation bias and enhances model performance under limited labeled data by employing random periodic alternation and a counteracting disturbance module.
|
| 61 |
+
|
| 62 |
+
# 2.2. Frequency Domain Enhancement Techniques
|
| 63 |
+
|
| 64 |
+
Fourier domain processing techniques enhance model generalization, robustness, and adaptability in computer vision tasks, particularly for domain adaptation and data augmentation. Fourier Domain Adaptation (FDA) [43] replaces the low-frequency amplitude spectrum of source images with that of target images, enabling model adaptation to new domains while preserving structural information. The Fourier-based Domain Generalization Framework [41] systematically investigates the roles of amplitude and phase spectra in domain shifts, revealing that the amplitude spectrum captures domain-specific style information, while the phase spectrum retains structural content. This insight underpins frequency-based augmentation strategies for improved generalization. FreMix [40] performs frequency-based augmentation by mixing amplitude spectra of different images, thereby enhancing domain generalization.
|
| 65 |
+
|
| 66 |
+
# 2.3. Research Status of Homogenization
|
| 67 |
+
|
| 68 |
+
In the co-training framework of semi-supervised learning, the issue of model homogenization has become a core challenge that restricts performance improvement. To address this problem, researchers have proposed systematic solutions from three levels: data augmentation, model architecture, and training strategies. On the level of data augmentation, consistency training based on strong-weak augmentation combinations[9, 16, 27, 32] generates multi-view samples through differentiated disturbances, while the contrastive learning framework[14] further utilizes graph structures to dynamically allocate samples, enhancing complementarity among models. Recent work [26] also minimizes mutual information to constrain the independence of view features, reducing redundancy. On the model architecture level, heterogeneous network designs (such as combinations of CNN and Transformer[16, 22]) and model parameter diversification[10] are used to force models to focus on different feature patterns. On the training strategy level, dynamic optimization methods[39], asymmetric learning mechanisms (such as alternating training[48]) have been proven to effectively prevent model convergence.
|
| 69 |
+
|
| 70 |
+
# 3. Method
|
| 71 |
+
|
| 72 |
+
In semi-supervised segmentation, we aim to train a model using both labeled and unlabeled data. The labeled dataset
|
| 73 |
+
|
| 74 |
+
$\mathcal{D}^l = \left\{(X_i^l,Y_i^l)\right\}_{i = 1}^N$ contains $N$ labeled images, where $X_{i}^{l}$ is the image and $Y_{i}^{l}$ is its corresponding segmentation label. The unlabeled dataset $\mathcal{D}^u = \left\{X_j^u\right\}_{j = 1}^M$ consists of $M$ unlabeled images, where $X_{j}^{u}$ has no associated label. Typically, $N\ll M$ , meaning the number of labeled images is much smaller than the unlabeled ones.
|
| 75 |
+
|
| 76 |
+
In our approach, we employ a single teacher model alongside two student models. The parameters of the teacher model are updated using an Exponential Moving Average (EMA) mechanism, specifically tuned based on the parameters of Student Model 1. The update process for the teacher model at each iteration can be expressed as:
|
| 77 |
+
|
| 78 |
+
$$
|
| 79 |
+
\theta_T^{(t)} = \lambda \theta_T^{(t-1)} + (1 - \lambda)\theta_{S1}^{(t)} \tag{1}
|
| 80 |
+
$$
|
| 81 |
+
|
| 82 |
+
Here, $\theta_T^{(t)}$ denotes the parameters of the teacher model at the $t$ -th iteration, $\theta_{S1}^{(t)}$ represents the parameters of Student Model 1 at the same iteration, and $\lambda \in [0,1]$ serves as a smoothing factor that balances the influence of previous teacher parameters against those of Student Model 1.
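A minimal sketch of the EMA update in Eq. (1), assuming standard PyTorch modules for the teacher and Student Model 1:

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student1: torch.nn.Module, lam: float = 0.99) -> None:
    """Eq. (1): theta_T <- lam * theta_T + (1 - lam) * theta_S1, applied parameter-wise."""
    for p_t, p_s in zip(teacher.parameters(), student1.parameters()):
        p_t.mul_(lam).add_(p_s, alpha=1.0 - lam)
```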
|
| 83 |
+
|
| 84 |
+
# 3.1. Overview
|
| 85 |
+
|
| 86 |
+
1. Data Augmentation and Teacher Network Training via Copy-Paste: We apply both weak and strong data augmentations to the data, enhancing its diversity through simple transformations and advanced techniques. Additionally, we utilize a Copy-Paste method to train a teacher network, ensuring high-quality pseudo-labels for the unlabeled data. This approach effectively improves the accuracy of the generated pseudo-labels.
|
| 87 |
+
2. Nonlinear Interpolation: To reduce homogenization in collaborative training and increase data diversity, we input data with varying degrees of enhancement (strong and weak) into different sub-models. We also exchange low-frequency components between weakly and strongly enhanced images to reduce the instability of collaborative training.
|
| 88 |
+
3. Differentiated Training: One model in the co-training framework undergoes additional training, while the other does not, reducing homogeneity between them.
|
| 89 |
+
|
| 90 |
+
# 3.2. Data Augmentation and Teacher Network Training via Copy-Paste
|
| 91 |
+
|
| 92 |
+
We begin by applying both strong (s) and weak (w) augmentations to the data. Weak augmentations include simple transformations such as rotation and flipping, while strong augmentations build on these with techniques like Cutout [7] and ColorJitter [42]. Inspired by BCP [3], we employ a Copy-Paste technique for further data enhancement. Specifically, the Copy-Paste process can be expressed as follows:
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
X_{w/s}^{in} = M \odot X_{w/s}^{l} + (1 - M) \odot X_{w/s}^{u}, \tag{2}
|
| 96 |
+
$$
|
| 97 |
+
|
| 98 |
+
$$
|
| 99 |
+
X_{w/s}^{out} = M \odot X_{w/s}^{u} + (1 - M) \odot X_{w/s}^{l}, \tag{3}
|
| 100 |
+
$$
|
| 101 |
+
|
| 102 |
+

|
| 103 |
+
|
| 104 |
+

|
| 105 |
+
|
| 106 |
+

|
| 107 |
+
Figure 2. Overview of our $\beta$ -FFT framework. In the figure, $X$ represents the data, $l$ represents labeled data, $u$ represents unlabeled data, $w$ denotes weak augmentation, and $s$ denotes strong augmentation. $P$ represents the confidence map obtained from the data through the model, $\tilde{Y}$ represents the class prediction map corresponding to the confidence map obtained through the model, and $Y$ represents the corresponding ground truth labels. FFT represents the Fast Fourier Transform, and iFFT represents the Inverse Fast Fourier Transform.
|
| 108 |
+
|
| 109 |
+
Here, $X_{w / s}^{l}$ represents labeled data, and $X_{w / s}^{u}$ represents unlabeled data. We utilize a mask $M \in \{0,1\}^{W \times H}$ to perform the bidirectional copy-pasting operation, controlling the blending between the images. The mask defines a zero-value region of size $\eta H \times \eta W$ , where $\eta \in (0,1)$ governs the proportion of the foreground region.
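The mask construction and the bidirectional Copy-Paste of Eqs. (2)-(3) can be sketched as follows; the default value of $\eta$ in the sketch is an illustrative assumption, not a value prescribed by the paper.

```python
import torch

def make_mask(h: int, w: int, eta: float = 2 / 3, device: str = "cpu") -> torch.Tensor:
    """Mask M in {0,1}^{H x W} with a randomly placed zero-valued rectangle of size (eta*H, eta*W)."""
    mask = torch.ones(h, w, device=device)
    zh, zw = int(eta * h), int(eta * w)
    top = torch.randint(0, h - zh + 1, (1,)).item()
    left = torch.randint(0, w - zw + 1, (1,)).item()
    mask[top:top + zh, left:left + zw] = 0
    return mask

def copy_paste(x_l: torch.Tensor, x_u: torch.Tensor, mask: torch.Tensor):
    """Eqs. (2)-(3): bidirectional copy-paste between labeled and unlabeled batches."""
    m = mask.view(1, 1, *mask.shape)          # broadcast over (B, C, H, W)
    x_in = m * x_l + (1 - m) * x_u            # labeled background, unlabeled foreground
    x_out = m * x_u + (1 - m) * x_l           # and vice versa
    return x_in, x_out
```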
|
| 110 |
+
|
| 111 |
+
To ensure high-quality predictions for the unlabeled data, we pre-train a teacher model $f_{\theta_T}$ using pairs of different labeled images, $X^{l_1}$ and $X^{l_2}$, as follows:
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
X_{w/s}^{\text{Teacher}} = M \odot X_{w/s}^{l_1} + (1 - M) \odot X_{w/s}^{l_2}, \tag{4}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
Y_{w/s}^{\text{Teacher}} = M \odot Y_{w/s}^{l_1} + (1 - M) \odot Y_{w/s}^{l_2}. \tag{5}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
Thus, the labels corresponding to the unlabeled data $X_{w / s}^{u}$ are given by:
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
Y_{w/s}^{u} = \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_T}\left(X_{w/s}^{u}\right), c\right). \tag{6}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
Accordingly, the labels for $X_{w / s}^{in}$ and $X_{w / s}^{out}$ can be expressed as:
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
Y_{w/s}^{in} = M \odot Y_{w/s}^{l} + (1 - M) \odot Y_{w/s}^{u}, \tag{7}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
$$
|
| 134 |
+
Y_{w/s}^{out} = M \odot Y_{w/s}^{u} + (1 - M) \odot Y_{w/s}^{l}. \tag{8}
|
| 135 |
+
$$
|
| 136 |
+
|
| 137 |
+
# 3.3. Non-linear Interpolation
|
| 138 |
+
|
| 139 |
+
We utilize a non-linear interpolation technique based on the exchange of low-frequency components to enhance the diversity of data samples, particularly for image augmentation. Our approach begins by decomposing an image $I$ into its frequency components using the FFT:
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
I = F^{-1}(F(I)) \tag{9}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
where $F$ denotes the FFT and $F^{-1}$ represents its inverse, the Inverse Fast Fourier Transform (iFFT). Subsequently,
|
| 146 |
+
|
| 147 |
+
we extract the low-frequency component $I^{\rightarrow \text{low}}$ of the image, defined as:
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
I^{\rightarrow \text{low}} = F^{-1}(F(I) \cdot H) \tag{10}
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
where $H$ is a low-pass filter employed to isolate low-frequency information. We experimentally investigate the setting of $H$ in the experiments section.
|
| 154 |
+
|
| 155 |
+
To enhance the diversity of the augmented images, we then perform non-linear interpolation by swapping the low-frequency components of the weakly augmented image $I_{w}$ with those of the strongly augmented image $I_{s}$. This generates two new images, $I_{w}^{\rightarrow F}$ and $I_{s}^{\rightarrow F}$:
|
| 156 |
+
|
| 157 |
+
$$
|
| 158 |
+
I_{w/s}^{\rightarrow F} = I_{w/s} - I_{w/s}^{\rightarrow \text{low}} + I_{s/w}^{\rightarrow \text{low}} \tag{11}
|
| 159 |
+
$$
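A minimal sketch of the low-frequency exchange in Eqs. (10)-(11) is shown below, using a centered square low-pass filter $H$; the filter size (`radius`) is a placeholder for the setting of $H$ studied in the experiments.

```python
import torch
import torch.fft as fft

def low_freq(x: torch.Tensor, radius: float = 0.1) -> torch.Tensor:
    """Eq. (10): keep only low frequencies via a centered square low-pass filter H."""
    Xf = fft.fftshift(fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    ch, cw = h // 2, w // 2
    rh, rw = int(radius * h), int(radius * w)
    H = torch.zeros_like(Xf.real)
    H[..., ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    return fft.ifft2(fft.ifftshift(Xf * H, dim=(-2, -1))).real

def swap_low_freq(x_w: torch.Tensor, x_s: torch.Tensor, radius: float = 0.1):
    """Eq. (11): exchange low-frequency content between weak and strong augmentations."""
    lw, ls = low_freq(x_w, radius), low_freq(x_s, radius)
    return x_w - lw + ls, x_s - ls + lw
```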
|
| 160 |
+
|
| 161 |
+
Next, to further increase the diversity of the augmented images, we perform non-linear interpolation by swapping the low-frequency components between the weakly and strongly augmented images $X_w^{in}$ and $X_s^{in}$, and likewise between $X_w^{out}$ and $X_s^{out}$. This process generates four new images, defined as:
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
X_{w/s}^{\text{in}/\text{out} \rightarrow F} = X_{w/s}^{\text{in}/\text{out}} - X_{w/s}^{\text{in}/\text{out} \rightarrow \text{low}} + X_{s/w}^{\text{in}/\text{out} \rightarrow \text{low}} \tag{12}
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
We aim for different models to produce similar outputs across various samples to ensure they learn consistent feature representations. This consistency enables the models to maintain strong performance on unseen samples, thereby enhancing their generalization ability. To strengthen this effect, we concatenate the four images generated through non-linear interpolation into a new set $X_{\mathrm{input}}^{F}$ :
|
| 168 |
+
|
| 169 |
+
$$
|
| 170 |
+
X_{\text{input}}^{F} = \operatorname{Concat}\left[X_{w}^{\text{in} \rightarrow F},\, X_{s}^{\text{in} \rightarrow F},\, X_{w}^{\text{out} \rightarrow F},\, X_{s}^{\text{out} \rightarrow F}\right] \tag{13}
|
| 171 |
+
$$
|
| 172 |
+
|
| 173 |
+
# 3.4. Differentiated Training of Student Models
|
| 174 |
+
|
| 175 |
+
Student Model 1 is additionally trained using labeled data, whereas Student Model 2 does not undergo this supplementary training. To enhance the robustness of Student Model 1, we incorporate two regularization techniques: linear interpolation consistency regularization and noise interpolation consistency regularization.
|
| 176 |
+
|
| 177 |
+
# 3.4.1. Linear Interpolation Consistency Regularization
|
| 178 |
+
|
| 179 |
+
Student Model 1 employs a pixel-wise data perturbation strategy along with consistency regularization that leverages unlabeled data. Given two unlabeled data points, $X^{u_1}$ and $X^{u_2}$ , we generate an interpolated data point $M_{\beta}(X^{u_1}, X^{u_2})$ , defined as follows:
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
M_{\beta}\left(X^{u_1}, X^{u_2}\right) = \beta X^{u_1} + (1 - \beta) X^{u_2} \tag{14}
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
In this equation, the hyperparameter $\beta$ is sampled from a Beta distribution, following the setup in Mixup[46]. We apply the linear interpolation consistency regularization,
|
| 186 |
+
|
| 187 |
+
which compares the output at the interpolated data point $f_{\theta_1}(M_\beta(X^{u_1}, X^{u_2}))$ with the outputs of the original data points:
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
M_{\beta}\left(f_{\theta_1}\left(X^{u_1}\right), f_{\theta_1}\left(X^{u_2}\right)\right) \approx f_{\theta_1}\left(M_{\beta}\left(X^{u_1}, X^{u_2}\right)\right) \tag{15}
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
# 3.4.2. Noise Interpolation Consistency Regularization
|
| 194 |
+
|
| 195 |
+
We also introduce a noise interpolation consistency constraint by reformulating equation 14 as follows:
|
| 196 |
+
|
| 197 |
+
$$
|
| 198 |
+
\begin{aligned} M_{\beta}\left(X^{u_1}, X^{u_2}\right) &= \beta X^{u_1} + (1 - \beta) X^{u_2} \\ &= X^{u_1} + (1 - \beta) \cdot \left(X^{u_2} - X^{u_1}\right) \end{aligned} \tag{16}
|
| 199 |
+
$$
|
| 200 |
+
|
| 201 |
+
In this formulation, we interpret $(1 - \beta) \cdot (X^{u_2} - X^{u_1})$ as noise interference. The noise consistency loss can then be expressed as:
|
| 202 |
+
|
| 203 |
+
$$
|
| 204 |
+
M_{\beta}\left(f_{\theta_1}\left(X^{u_1}\right), f_{\theta_1}\left(X^{u_2}\right)\right) \approx f_{\theta_1}\left(M_{\beta}\left(X^{u_1}\right)\right) \tag{17}
|
| 205 |
+
$$
|
| 206 |
+
|
| 207 |
+
This noise consistency constraint ensures that Student Model 1 produces outputs that closely resemble those of the teacher model, even when the input data is slightly perturbed. By assessing the difference between the output at the interpolated point $M_{\beta}(X^{u_1}, X^{u_2})$ and the teacher model's output at $X^{u_1}$ , this loss term encourages Student Model 1 to utilize information from more than just a single data point $X^{u_1}$ . This approach effectively enhances the model's generalization ability.
|
| 208 |
+
|
| 209 |
+
In contrast, Student Model 2 does not leverage labeled data for training. It employs cross pseudo-supervision and applies cross-consistency loss derived from the nonlinear interpolation process. Unlike Student Model 1, which utilizes both labeled and unlabeled data for consistency regularization, Student Model 2 focuses on leveraging shared learning signals between the two student models. This results in markedly different training trajectories for Student Model 1 and Student Model 2, promoting diversity in learning signals throughout the overall learning process.
|
| 210 |
+
|
| 211 |
+
Since Linear Interpolation Consistency Regularization and Noise Interpolation Consistency Regularization serve as two distinct additional training processes for labeled data, we refer to the training process with linear interpolation as LICR, and the one with noise interpolation as NICR.
|
| 212 |
+
|
| 213 |
+
# 4. Loss Functions
|
| 214 |
+
|
| 215 |
+
The overall loss function comprises three main components: the Cross Teaching Loss, the Nonlinear Interpolation Loss, and the Differentiation Loss. The notation $\mathcal{L}_{\mathrm{ce}}$ represents the cross-entropy loss, and $\mathcal{L}_{\mathrm{dice}}$ represents the Dice loss.
|
| 216 |
+
|
| 217 |
+
# 4.1. Cross Teaching Loss
|
| 218 |
+
|
| 219 |
+
Cross Teaching Loss leverages pseudo-labels from one model to supervise the other. It consists of a supervised loss and a cross pseudo-supervised loss. The supervised
|
| 220 |
+
|
| 221 |
+
loss ensures effective learning from ground truth labels by combining cross-entropy and Dice losses on both strongly and weakly augmented inputs:
|
| 222 |
+
|
| 223 |
+
$$
|
| 224 |
+
\begin{aligned} \mathcal{L}_{sup}^{aug} &= \frac{1}{2}\left(\mathcal{L}_{ce,dice}\left(f_{\theta_1}\left(X_w^{in}\right), Y_w^{in}\right) + \mathcal{L}_{ce,dice}\left(f_{\theta_1}\left(X_w^{out}\right), Y_w^{out}\right)\right) \\ &+ \frac{1}{2}\left(\mathcal{L}_{ce,dice}\left(f_{\theta_2}\left(X_s^{in}\right), Y_s^{in}\right) + \mathcal{L}_{ce,dice}\left(f_{\theta_2}\left(X_s^{out}\right), Y_s^{out}\right)\right) \end{aligned} \tag{18}
|
| 225 |
+
$$
|
| 226 |
+
|
| 227 |
+
To enforce consistency, the cross pseudo-supervised loss aligns predictions with pseudo-labels from the other model:
|
| 228 |
+
|
| 229 |
+
$$
|
| 230 |
+
\begin{aligned} \mathcal{L}_{cps}^{aug} &= \mathcal{L}_{dice}\left(f_{\theta_1}\left(X_w^{in}\right), \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_2}\left(X_s^{in}\right), c\right)\right) \\ &+ \mathcal{L}_{dice}\left(f_{\theta_1}\left(X_w^{out}\right), \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_2}\left(X_s^{out}\right), c\right)\right) \\ &+ \mathcal{L}_{dice}\left(f_{\theta_2}\left(X_s^{in}\right), \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_1}\left(X_w^{in}\right), c\right)\right) \\ &+ \mathcal{L}_{dice}\left(f_{\theta_2}\left(X_s^{out}\right), \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_1}\left(X_w^{out}\right), c\right)\right) \end{aligned} \tag{19}
|
| 231 |
+
$$
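For illustration, one term of the cross pseudo-supervision loss in Eq. (19) can be sketched as below: one model's soft prediction is supervised with the Dice loss against the other model's hard (argmax) pseudo-labels. The soft Dice formulation here is a generic one, not necessarily the exact implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def dice_loss(probs: torch.Tensor, target_onehot: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Generic soft Dice loss over (B, C, H, W) probabilities and one-hot targets."""
    dims = (0, 2, 3)
    inter = (probs * target_onehot).sum(dims)
    union = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def cps_term(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """One term of Eq. (19): model A supervised by model B's hard pseudo-labels."""
    n_classes = logits_a.shape[1]
    pseudo = logits_b.detach().argmax(dim=1)                        # argmax_c f_theta_b(x)
    onehot = F.one_hot(pseudo, n_classes).permute(0, 3, 1, 2).float()
    return dice_loss(torch.softmax(logits_a, dim=1), onehot)
```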
|
| 232 |
+
|
| 233 |
+
# 4.2. Nonlinear Interpolation Consistency Loss
|
| 234 |
+
|
| 235 |
+
The Nonlinear Interpolation Consistency Loss ensures that the model outputs are consistent across interpolated data points. This is represented as follows:
|
| 236 |
+
|
| 237 |
+
$$
|
| 238 |
+
\begin{aligned} \mathcal{L}_{\text{consistency}}^{\text{Nonlinear}} &= \mathcal{L}_{dice}\left(f_{\theta_1}\left(X_{\text{input}}^{F}\right), \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_2}\left(X_{\text{input}}^{F}\right), c\right)\right) \\ &+ \mathcal{L}_{dice}\left(f_{\theta_2}\left(X_{\text{input}}^{F}\right), \underset{c \in C}{\operatorname{argmax}}\left(f_{\theta_1}\left(X_{\text{input}}^{F}\right), c\right)\right) \end{aligned} \tag{20}
|
| 239 |
+
$$
|
| 240 |
+
|
| 241 |
+
# 4.3. Differentiation Loss
|
| 242 |
+
|
| 243 |
+
Differentiated training primarily focuses on Student Model 1, incorporating supervised training with labeled data and a regularization term for unlabeled data. Consider labeled samples $(X_{i}^{l},Y_{i}^{l})\sim D^{l}$ drawn from the joint distribution $P(X,Y)$ and unlabeled samples $(X_{i}^{u},X_{j}^{u})\sim D^{u}$ drawn from the marginal distribution $P(X) = \frac{P(X,Y)}{P(Y|X)}$. Using SGD, at every iteration $t$ the encoder-decoder parameters $\theta$ are updated by minimizing the objective function:
|
| 244 |
+
|
| 245 |
+
$$
|
| 246 |
+
\mathcal {L} _ {\text {s t u d e n t 1}} = \mathcal {L} _ {\text {s t u d e n t 1}} ^ {l} + r (t) \cdot \mathcal {L} _ {\text {s t u d e n t 1}} ^ {u} \tag {21}
|
| 247 |
+
$$
|
| 248 |
+
|
| 249 |
+
where $\mathcal{L}_{student1}^{l}$ is the combination of cross-entropy and Dice losses applied over the labeled data $D^{l}$, expressed as:

$$
\mathcal{L}_{student1}^{l} = \frac{1}{2}\left(\mathcal{L}_{ce}\left(f_{\theta_1}(X^{l}), Y^{l}\right) + \mathcal{L}_{dice}\left(f_{\theta_1}(X^{l}), Y^{l}\right)\right) \tag{22}
$$
$\mathcal{L}_{student1}^u$ is the interpolation consistency regularization loss applied over the unlabeled data $D^{u}$, and $r(t)$ is a ramp function that adjusts the weight of $\mathcal{L}_{student1}^u$ at every iteration. $\mathcal{L}_{student1}^u$ takes one of two forms: LICR or NICR.
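The exact form of $r(t)$ is not specified in this section; the Gaussian ramp-up commonly used for consistency weights is sketched below purely as an assumption.

```python
import math

def ramp_up(t, ramp_len):
    """Gaussian ramp-up often used for consistency weights (an assumption;
    the paper only states that r(t) grows with the iteration index t)."""
    if ramp_len == 0:
        return 1.0
    phase = 1.0 - min(t, ramp_len) / ramp_len
    return math.exp(-5.0 * phase * phase)

# L_student1 = L_student1_labeled + ramp_up(t, ramp_len) * L_student1_unlabeled
```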
LICR is computed over pairs $(X_{i}^{u}, X_{j}^{u})$ from sampled mini-batches and the corresponding pseudo-labels $f_{\theta_T}(X_i^u)$ and $f_{\theta_T}(X_j^u)$.

Next, the interpolation $M_{\beta}(X_i^u, X_j^u)$ and the model prediction $f_{\theta_1}(M_{\beta}(X_i^u, X_j^u))$ are computed, and $\theta$ is updated to bring the model prediction closer to the interpolation of the pseudo-labels, $M_{\beta}(f_{\theta_T}(X_i^u), f_{\theta_T}(X_j^u))$. The deviation between the model prediction and the interpolated pseudo-labels is penalized with a mean squared error loss. LICR can be expressed as:

$$
\mathcal{L}_{LICR}^{u} = \mathbb{E}_{X_i^u, X_j^u}\left[\left\| f_{\theta_1}\left(M_{\beta}(X_i^u, X_j^u)\right) - M_{\beta}\left(f_{\theta_T}(X_i^u), f_{\theta_T}(X_j^u)\right)\right\|^{2}\right] \tag{23}
$$
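A sketch of the LICR term, treating the mixing operator $M_\beta$ and the two networks as generic callables; the specific `mix` function (the paper's $\beta$-based interpolation) is passed in rather than reimplemented here.

```python
import torch
import torch.nn.functional as F

def licr(student, teacher, x_i, x_j, mix):
    """Eq. (23): MSE between the student's prediction on the mixed inputs and
    the same mixing of the teacher's pseudo-labels (teacher outputs detached)."""
    with torch.no_grad():
        p_i = teacher(x_i)
        p_j = teacher(x_j)
    pred = student(mix(x_i, x_j))
    return F.mse_loss(pred, mix(p_i, p_j))
```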
According to Equations 16 and 17, LICR can be transformed to obtain a new loss, NICR, defined as:

$$
\mathcal{L}_{NICR}^{u} = \mathbb{E}_{X_i^u, X_j^u}\left[\left\| f_{\theta_1}\left(M_{\beta}(X_i^u, X_i^u)\right) - f_{\theta_T}(X_i^u)\right\|^{2}\right] \tag{24}
$$
# 5. Experiments

# 5.1. Dataset and Evaluation Metrics

ACDC Dataset: The ACDC dataset [5] consists of 200 annotated short-axis cardiac cine-MR images from 100 patients across four classes. 2D segmentation is more common than 3D for this dataset [2]. Evaluation metrics include the Dice Similarity Coefficient (DSC), Jaccard index, $95\%$ Hausdorff Distance (95HD), and Average Surface Distance (ASD). Following BCP [3] and ABD [9], the input size was set to $256 \times 256$ with a batch size of 24 for training.

PROMISE12 Dataset: The PROMISE12 dataset [17] was introduced in the MICCAI 2012 prostate segmentation challenge and comprises MRI scans of 50 patients. All 3D scans are converted into 2D slices. DSC and ASD are used for evaluation. Following ABD [9], the input size was set to $224 \times 224$ with a batch size of 16 for training.

MS-CMRSeg 2019: The MS-CMRSeg 2019 dataset [35, 51] from the MICCAI 2019 challenge includes 45 multi-sequence cardiac MRI scans of cardiomyopathy patients. Evaluation metrics are DSC, Jaccard, 95HD, and ASD. Following DiffRect [19], the input size was set to $256 \times 256$ with a training batch size of 8.
# 5.2. Comparison with SOTA Methods

Compared with SOTA methods on the ACDC test set, $\beta$-FFT demonstrates superior performance, particularly with $10\%$ labeled data, where it achieves Dice and Jaccard scores of $90.50 \pm 0.04\%$ and $83.12 \pm 0.12\%$, outperforming recent methods such as AD-MT and ABD.

On the PROMISE12 test set, $\beta$-FFT also outperforms existing methods with $20\%$ labeled data, achieving a Dice score of $83.75 \pm 0.65\%$ and an ASD of $1.20 \pm 0.07$, surpassing AD-MT and ABD.

On the MS-CMRSEG 2019 dataset, $\beta$-FFT achieves a Dice score of $87.79 \pm 0.04\%$ and a Jaccard index of $78.60 \pm 0.06\%$, significantly outperforming popular semi-supervised approaches and approaching fully supervised performance with less labeled data.

Figure 3 presents a visual comparison of our method with other similar approaches.
Table 1. Comparisons with other methods on the ACDC test set.
<table><tr><td rowspan="2">Method</td><td colspan="2">Scans used</td><td colspan="4">Metrics</td></tr><tr><td>Labeled</td><td>Unlabeled</td><td>DSC↑</td><td>Jaccard↑</td><td>95HD↓</td><td>ASD↓</td></tr><tr><td rowspan="3">U-Net (MICCAI'2015) [29]</td><td>3(5%)</td><td>0</td><td>47.83</td><td>37.01</td><td>31.16</td><td>12.62</td></tr><tr><td>7(10%)</td><td>0</td><td>79.41</td><td>68.11</td><td>9.35</td><td>2.70</td></tr><tr><td>70(All)</td><td>0</td><td>91.44</td><td>84.59</td><td>4.30</td><td>0.99</td></tr><tr><td>DTC (AAAI'2021) [20]</td><td></td><td></td><td>56.90</td><td>45.67</td><td>23.36</td><td>7.39</td></tr><tr><td>URPC (MICCAI'2021) [21]</td><td></td><td></td><td>55.87</td><td>44.64</td><td>13.60</td><td>3.74</td></tr><tr><td>MC-Net (MICCAI'2021) [36]</td><td></td><td></td><td>62.85</td><td>52.29</td><td>7.62</td><td>2.33</td></tr><tr><td>SS-Net (MICCAI'2022) [38]</td><td></td><td></td><td>65.83</td><td>55.38</td><td>6.67</td><td>2.28</td></tr><tr><td>SCP-Net (MICCAI'2023) [47]</td><td>3(5%)</td><td>67(95%)</td><td>87.27</td><td>-</td><td>-</td><td>2.65</td></tr><tr><td>Cross Teaching (MIDL'2022) [22]</td><td></td><td></td><td>65.60</td><td>-</td><td>16.2</td><td>-</td></tr><tr><td>BCP (CVPR'2023) [3]</td><td></td><td></td><td>87.59</td><td>78.67</td><td>1.90</td><td>0.67</td></tr><tr><td>DiffRec (MICCAI'2024) [19]</td><td></td><td></td><td>82.46</td><td>71.76</td><td>7.18</td><td>1.94</td></tr><tr><td>ABD (CVPR'2024) [9]</td><td></td><td></td><td>88.96</td><td>80.70</td><td>1.57</td><td>0.52</td></tr><tr><td>AD-MT (ECCV'2024) [48]</td><td></td><td></td><td>88.75</td><td>80.41</td><td>1.48</td><td>0.50</td></tr><tr><td>Ours-β-FFT</td><td></td><td></td><td>89.46±0.12</td><td>81.46±0.22</td><td>1.78±0.32</td><td>0.55±0.10</td></tr><tr><td>DTC (AAAI'2021) [20]</td><td></td><td></td><td>84.29</td><td>73.92</td><td>12.81</td><td>4.01</td></tr><tr><td>URPC (MICCAI'2021) [21]</td><td></td><td></td><td>83.10</td><td>72.41</td><td>4.84</td><td>1.53</td></tr><tr><td>MC-Net (MICCAI'2021) [36]</td><td></td><td></td><td>86.44</td><td>77.04</td><td>5.50</td><td>1.84</td></tr><tr><td>SS-Net (MICCAI'2022) [38]</td><td></td><td></td><td>86.78</td><td>77.67</td><td>6.07</td><td>1.40</td></tr><tr><td>Cross Teaching (MIDL'2022) [22]</td><td></td><td></td><td>86.45</td><td>77.02</td><td>6.30</td><td>1.86</td></tr><tr><td>SCP-Net (MICCAI'2023) [47]</td><td>7(10%)</td><td>63(90%)</td><td>89.69</td><td>-</td><td>-</td><td>0.73</td></tr><tr><td>PLGCL (CVPR'2023) [4]</td><td></td><td></td><td>89.1</td><td>-</td><td>4.98</td><td>1.80</td></tr><tr><td>BCP (CVPR'2023) [3]</td><td></td><td></td><td>88.84</td><td>80.62</td><td>3.98</td><td>1.17</td></tr><tr><td>DiffRec (MICCAI'2024) [19]</td><td></td><td></td><td>89.27</td><td>81.13</td><td>3.85</td><td>1.00</td></tr><tr><td>ABD (CVPR'2024) [9]</td><td></td><td></td><td>89.81</td><td>81.95</td><td>1.46</td><td>0.49</td></tr><tr><td>AD-MT (ECCV'2024) [48]</td><td></td><td></td><td>89.46</td><td>81.47</td><td>1.51</td><td>0.44</td></tr><tr><td>Ours-β-FFT</td><td></td><td></td><td>90.50±0.04</td><td>83.12±0.12</td><td>2.38±0.87</td><td>0.62±0.13</td></tr></table>
Table 2. Comparisons with state-of-the-art semi-supervised segmentation methods on the PROMISE12 test set.
<table><tr><td rowspan="2">Method</td><td colspan="2">Scans used</td><td colspan="2">Metrics</td></tr><tr><td>Labeled</td><td>Unlabeled</td><td>DSC↑</td><td>ASD↓</td></tr><tr><td rowspan="2">U-Net [29]</td><td>7(20%)</td><td>0</td><td>60.88</td><td>13.87</td></tr><tr><td>35(100%)</td><td>0</td><td>84.76</td><td>1.58</td></tr><tr><td>CCT [25]</td><td></td><td></td><td>71.43</td><td>16.61</td></tr><tr><td>URPC [21]</td><td></td><td></td><td>63.23</td><td>4.33</td></tr><tr><td>SS-Net [38]</td><td></td><td></td><td>62.31</td><td>4.36</td></tr><tr><td>SLC-Net [18]</td><td>7(20%)</td><td>28(80%)</td><td>68.31</td><td>4.69</td></tr><tr><td>SCP-Net [47]</td><td></td><td></td><td>77.06</td><td>3.52</td></tr><tr><td>ABD [9]</td><td></td><td></td><td>82.06</td><td>1.33</td></tr><tr><td>AD-MT [48]</td><td></td><td></td><td>79.82</td><td>1.77</td></tr><tr><td>Ours-β-FFT</td><td></td><td></td><td>83.75±0.65</td><td>1.20±0.07</td></tr></table>
Table 3. Segmentation results on MS-CMRSEG 2019 with $20\%$ data labeled.
<table><tr><td>Method</td><td>Dice ↑</td><td>Jaccard↑</td><td>HD95↓</td><td>ASD↓</td></tr><tr><td>UAMT [45]</td><td>84.27</td><td>73.69</td><td>12.15</td><td>4.18</td></tr><tr><td>FixMatch [32]</td><td>84.31</td><td>73.57</td><td>17.79</td><td>4.81</td></tr><tr><td>CPS [8]</td><td>83.66</td><td>73.03</td><td>15.01</td><td>4.30</td></tr><tr><td>ICT [34]</td><td>83.66</td><td>73.06</td><td>17.24</td><td>4.85</td></tr><tr><td>MCNetV2 [37]</td><td>83.93</td><td>73.45</td><td>13.10</td><td>3.39</td></tr><tr><td>INCL [50]</td><td>84.33</td><td>73.92</td><td>9.95</td><td>2.61</td></tr><tr><td>DiffRect [19]</td><td>86.78</td><td>77.13</td><td>6.39</td><td>1.85</td></tr><tr><td>ABD [9]</td><td>87.25</td><td>77.77</td><td>11.74</td><td>4.25</td></tr><tr><td>AD-MT [48]</td><td>86.30</td><td>76.39</td><td>3.56</td><td>1.21</td></tr><tr><td>Ours-β-FFT</td><td>87.79±0.04</td><td>78.60±0.06</td><td>3.75±0.36</td><td>1.62±0.20</td></tr><tr><td>Supervised [29]</td><td>88.19</td><td>79.28</td><td>4.21</td><td>1.32</td></tr></table>
# 5.3. Ablation Study Analysis

The baseline method is an improved version of BCP [3] taken from ABD [9], combined with the Cross Teaching framework [22]. Its structure is a mean-teacher framework with two student models and one teacher model. All experiments are conducted on the ACDC dataset with $10\%$ of the data labeled.

Figure 3. Visualization of segmentation results on the ACDC dataset with $10\%$ labeled data, the PROMISE12 dataset with $20\%$ labeled data, and the MS-CMRSEG 2019 dataset with $20\%$ labeled data.

# 5.3.1. Effect of Non-linear Interpolation Strategy

The experimental results demonstrate that the low-frequency component enhancement method, utilizing low-pass filters, effectively improves model performance on the ACDC dataset. Specifically, the $20 \times 20$ filter size achieved the best results on the validation set, while the $30 \times 30$ filter size excelled on the test set. This finding highlights the importance of selecting an appropriate filter size to balance detail and global information, ultimately optimizing the model's generalization ability.
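Equations 16 and 17 (the FFT-based interpolation) are defined earlier in the paper and are not reproduced here, but the low-pass filtering that the filter sizes in Table 4 refer to can be sketched as follows. The centered square mask is an assumption for illustration, not the paper's exact implementation.

```python
import torch

def low_pass(x, size):
    """Keep a centered size x size low-frequency window of the 2-D FFT of x
    (x: (B, C, H, W)); `size` corresponds to the filter sizes in Table 4."""
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    mask = torch.zeros_like(spec.real)
    h, w = x.shape[-2:]
    top, left = (h - size) // 2, (w - size) // 2
    mask[..., top:top + size, left:left + size] = 1.0
    filtered = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1)))
    return filtered.real
```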
Table 4. Effect of Low-Pass Filter Size on Model Performance.
<table><tr><td rowspan="2">H</td><td colspan="2">ACDC Validation dataset</td><td colspan="2">ACDC Test dataset</td></tr><tr><td>Dice↑</td><td>Jaccard↑</td><td>Dice↑</td><td>Jaccard↑</td></tr><tr><td>None</td><td>89.53</td><td>81.86</td><td>89.60</td><td>81.70</td></tr><tr><td>20x20</td><td>90.23</td><td>82.94</td><td>88.96</td><td>80.76</td></tr><tr><td>30x30</td><td>89.77</td><td>82.27</td><td>89.96</td><td>82.29</td></tr><tr><td>40x40</td><td>89.93</td><td>82.58</td><td>89.65</td><td>81.80</td></tr><tr><td>50x50</td><td>89.81</td><td>82.38</td><td>89.49</td><td>81.55</td></tr></table>
# 5.3.2. Impact of Differentiated Training Strategies on Model Performance

In this analysis, we use Student Model 1 as an example to validate the impact of differentiated training on model performance on the ACDC validation and test datasets.

Table 5 indicates that standardized training is conducted jointly for Student Model 1 and Student Model 2, while the differentiated training columns represent additional training applied specifically to Student Model 1. Here, $w$ denotes the use of weakly augmented data and $s$ the use of strongly augmented data. The results show that introducing either of the two differentiated training strategies, LICR or NICR, on top of the baseline model significantly improves the Dice score on the test set, with LICR(w) achieving 90.24 and NICR(w) reaching 90.23, highlighting the effectiveness of differentiated training in enhancing model performance.
Table 5. Ablation study of Differentiated Training on ACDC Validation and Test Datasets.
<table><tr><td colspan="2">Standardized Training</td><td colspan="4">Differentiated Training</td><td colspan="2">ACDC Validation</td><td colspan="2">ACDC Test</td></tr><tr><td>Baseline</td><td>Non-linear</td><td>LICR(w)</td><td>NICR(w)</td><td>LICR(s)</td><td>NICR(s)</td><td>Dice†</td><td>Jaccard†</td><td>Dice†</td><td>Jaccard†</td></tr><tr><td>✓</td><td></td><td></td><td></td><td></td><td></td><td>89.53</td><td>81.86</td><td>89.60</td><td>81.70</td></tr><tr><td>✓</td><td></td><td>✓</td><td></td><td></td><td></td><td>89.95</td><td>82.58</td><td>90.24</td><td>82.73</td></tr><tr><td>✓</td><td></td><td></td><td>✓</td><td></td><td></td><td>90.34</td><td>82.83</td><td>90.23</td><td>82.69</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>✓</td><td></td><td>90.25</td><td>82.97</td><td>90.26</td><td>82.70</td></tr><tr><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td><td>90.38</td><td>83.23</td><td>90.26</td><td>82.74</td></tr><tr><td>✓</td><td></td><td>✓</td><td></td><td>✓</td><td></td><td>90.09</td><td>82.79</td><td>90.13</td><td>82.47</td></tr><tr><td>✓</td><td></td><td>✓</td><td></td><td></td><td>✓</td><td>90.10</td><td>82.70</td><td>90.10</td><td>82.54</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>89.77</td><td>82.27</td><td>89.96</td><td>82.29</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td>90.53</td><td>83.54</td><td>90.27</td><td>82.76</td></tr><tr><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td>90.21</td><td>83.01</td><td>90.34</td><td>82.84</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>✓</td><td></td><td>90.30</td><td>83.14</td><td>90.38</td><td>82.95</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td>✓</td><td>90.31</td><td>83.02</td><td>89.97</td><td>82.32</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td>90.77</td><td>83.81</td><td>90.54</td><td>83.23</td></tr></table>
Employing the non-linear interpolation strategy, the model's performance as measured by the Dice score on the test set reaches 90.54 when both LICR(w) and LICR(s) are applied simultaneously, confirming the effectiveness of our differentiated training approach. This outcome indicates that optimizing training strategies can significantly enhance model performance. Visualizing the training process, as shown in Figure 4, further illustrates that the performance of the two sub-models in $\beta$-FFT is significantly better than that of the baseline, highlighting the benefits of our proposed method.

Figure 4. Comparison of the training process between $\beta$-FFT and Baseline methods.
# 5.3.3. Exploring the De-Homogenization Effects of Differentiated Training

To further investigate the impact of differentiated training on model homogenization, we first applied LICR(w) to Student Model 1 and LICR(s) to Student Model 2. This strategy aligns with the initial data augmentation strengths (weak and strong) assigned to the two student models, constituting a synchronized operation and enabling collaborative training. Experimental results demonstrate that this approach indeed enhances model performance. However, when we removed LICR(w) from Student Model 1 or LICR(s) from Student Model 2, performance improved further, indicating that applying LICR to a single sub-model is more effective than applying it to both simultaneously.

Further analysis reveals that when LICR(w) and LICR(s) are applied exclusively to Student Model 1, the model achieves its highest performance. As shown in Table 6, the Dice and Jaccard scores on the ACDC validation and test datasets reach 90.77 / 83.81 and 90.54 / 83.23, respectively.
Table 6. Ablation study results demonstrating the performance of two models after applying Non-Linear Interpolation, followed by differentiated training using LICR(w) and LICR(s) on the ACDC dataset. The gray row indicates that both Student Model 1 and Student Model 2 undergo simultaneous training, rather than differentiated training.
<table><tr><td colspan="2">Differentiation Training</td><td colspan="2">ACDC Validation dataset</td><td colspan="2">ACDC Test dataset</td></tr><tr><td>LICR(w)</td><td>LICR(s)</td><td>Dice↑</td><td>Jaccard↑</td><td>Dice ↑</td><td>Jaccard ↑</td></tr><tr><td rowspan="2">Student 1</td><td></td><td>89.77</td><td>82.27</td><td>89.96</td><td>82.29</td></tr><tr><td></td><td>90.53</td><td>83.54</td><td>90.27</td><td>82.76</td></tr><tr><td>Student 1</td><td>Student 2</td><td>90.19</td><td>83.01</td><td>90.10</td><td>82.47</td></tr><tr><td rowspan="2">Student 1</td><td>Student 2</td><td>90.20</td><td>83.00</td><td>90.06</td><td>82.47</td></tr><tr><td>Student 1</td><td>90.77</td><td>83.81</td><td>90.54</td><td>83.23</td></tr></table>
In contrast, when both student models undergo simultaneous LICR(w) and LICR(s) training (the gray row), performance decreases slightly. This further confirms that differentiated training effectively enhances model performance and mitigates homogenization issues.
# 5.3.4. Effect of Beta Distribution Sampling in Differentiated Training

We investigated the impact of the Beta distribution parameters $(a, a)$ on sample mixing and model performance. By adjusting $a$, we control sample diversity and feature complexity. The ablation results in Figure 5 show that smaller Beta parameters improve the Dice and Jaccard scores, peaking at $\mathrm{Beta}(0.1, 0.1)$ with validation/test scores of 90.77 / 83.81 and 90.54 / 83.23, respectively. This indicates that lower Beta parameter values enhance model generalization and robustness.
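As a quick illustration of why small symmetric Beta parameters help, sampling from $\mathrm{Beta}(0.1, 0.1)$ concentrates the mixing coefficient near 0 or 1, so most mixed samples stay close to one of their sources:

```python
import numpy as np

lam = np.random.beta(0.1, 0.1, size=10000)
print((np.minimum(lam, 1 - lam) < 0.05).mean())  # most draws fall near 0 or 1
```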
Figure 5. Differentiated training was conducted on Student 1 under the conditions of simultaneously applying nonlinear interpolation and both LICR(w) and LICR(s).
# 6. Conclusion
In this study, we address the issue of model homogenization from both the data and the structural perspective. We distinguish the sub-models using strong and weak augmentations and introduce a nonlinear interpolation method based on the Fast Fourier Transform (FFT) to generate more diverse training samples, thereby enhancing the model's generalization ability. Furthermore, we implement differentiated training by applying additional training to one of the models, effectively reducing homogenization. Extensive ablation experiments validate the effectiveness of our approach, and the results demonstrate that $\beta$-FFT outperforms current state-of-the-art (SOTA) methods on three public medical image datasets.
Acknowledgment. This research was supported by: The Outstanding Award for Talent Project of the Chinese Academy of Sciences [Grant Number 29J20-052-III]; The Shaanxi Province Technological Innovation Guidance Special Project: Regional Science and Technology Innovation Center, Strategic Scientific and Technological Strength Category (No.2024QY-SZX-26); The Key Project for Teaching Research of the Medical Department of Wuhan University [Grant Number 2024ZD21]; The Key R&D Project of Hubei Province [Grant Number 2023BCB024].
# References
|
| 365 |
+
|
| 366 |
+
[1] Eric Arazo, Diego Ortega, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In 2020 International joint conference on neural networks (IJCNN), pages 1-8. IEEE, 2020. 1, 2
|
| 367 |
+
[2] Wenjia Bai, Ozan Oktay, Matthew Sinclair, Hideaki Suzuki, Martin Rajchl, Giacomo Tarroni, Ben Glocker, Andrew King, Paul M Matthews, and Daniel Rueckert. Semisupervised learning for network-based cardiac mr image segmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 253-260. Springer, 2017. 6
|
| 368 |
+
[3] Yunhao Bai, Duowen Chen, Qingli Li, Wei Shen, and Yan Wang. Bidirectional copy-paste for semi-supervised medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11514-11524, 2023. 2, 3, 6, 7
|
| 369 |
+
[4] Hritam Basak and Zhaozheng Yin. Pseudo-label guided contrastive learning for semi-supervised medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19786-19797, 2023. 7
|
| 370 |
+
[5] Olivier Bernard, Alain Lalande, Clement Zotti, Frederick Cervenansky, Xin Yang, Pheng-Ann Heng, Irem Cetin, Karim Lekadir, Oscar Camara, Miguel Angel Gonzalez Ballester, et al. Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE transactions on medical imaging, 37 (11):2514-2525, 2018. 6
|
| 371 |
+
[6] Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, and Mingsheng Long. Debiased self-training for semi-supervised learning. Advances in Neural Information Processing Systems, 35:32424-32437, 2022. 2
|
| 372 |
+
[7] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020. 3
|
| 373 |
+
[8] Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2613-2622, 2021. 2, 7
|
| 374 |
+
[9] Hanyang Chi, Jian Pang, Bingfeng Zhang, and Weifeng Liu. Adaptive bidirectional displacement for semi-supervised medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4070-4080, 2024. 3, 6, 7
|
| 377 |
+
[10] Kaiwen Cui, Jiaxing Huang, Zhipeng Luo, Gongjie Zhang, Fangneng Zhan, and Shijian Lu. Genco: Generative constraining for generative adversarial networks with limited data. In Proceedings of the AAAI conference on artificial intelligence, pages 499-507, 2022. 3
|
| 378 |
+
[11] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. Advances in neural information processing systems, 17, 2004. 1
|
| 379 |
+
[12] Nastassya Horlava, Alisa Mironenko, Sebastian Niehaus, Sebastian Wagner, Ingo Roeder, and Nico Scherf. A comparative study of semi-and self-supervised semantic segmentation of biomedical microscopy data. arXiv preprint arXiv:2011.08076, 2020. 2
|
| 380 |
+
[13] Zhanghan Ke, Daoye Wang, Qiong Yan, Jimmy Ren, and Rynson WH Lau. Dual student: Breaking the limits of the teacher in semi-supervised learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6728-6736, 2019. 2
|
| 381 |
+
[14] Junnan Li, Caiming Xiong, and Steven CH Hoi. Comatch: Semi-supervised learning with contrastive graph regularization. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9475-9484, 2021. 3
|
| 382 |
+
[15] Xiaoxu Li, Yu Peng, and Min Xu. Patch-shuffle-based semi-supervised segmentation of bone computed tomography via consistent learning. Biomedical Signal Processing and Control, 80:104239, 2023. 3
|
| 383 |
+
[16] Yijiang Li, Xinjiang Wang, Lihe Yang, Litong Feng, Wayne Zhang, and Ying Gao. Diverse cotraining makes strong semi-supervised segmentor. arXiv preprint arXiv:2308.09281, 2023. 2, 3
|
| 384 |
+
[17] Geert Litjens, Robert Toth, Wendy Van De Ven, Caroline Hoeks, Sjoerd Kerkstra, Bram Van Ginneken, Graham Vincent, Gwenael Guillard, Neil Birbeck, Jindang Zhang, et al. Evaluation of prostate segmentation algorithms for mri: the promise12 challenge. Medical image analysis, 18(2):359-373, 2014. 6
|
| 385 |
+
[18] Jinhua Liu, Christian Desrosiers, and Yuanfeng Zhou. Semi-supervised medical image segmentation using cross-model pseudo-supervision with shape awareness and local context constraints. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 140-150. Springer, 2022. 7
|
| 386 |
+
[19] Xinyu Liu, Wuyang Li, and Yixuan Yuan. Diffrect: Latent diffusion label rectification for semi-supervised medical image segmentation. arXiv preprint arXiv:2407.09918, 2024. 6, 7
|
| 387 |
+
[20] Xiangde Luo, Jieneng Chen, Tao Song, and Guotai Wang. Semi-supervised medical image segmentation through dual-task consistency. In Proceedings of the AAAI conference on artificial intelligence, number 10, pages 8801-8809, 2021. 2, 7
|
| 388 |
+
[21] Xiangde Luo, Wenjun Liao, Jieneng Chen, Tao Song, Yinan Chen, Shichuan Zhang, Nianyong Chen, Guotai Wang, and Shaoting Zhang. Efficient semi-supervised gross target volume of nasopharyngeal carcinoma segmentation via uncertainty rectified pyramid consistency. In Medical Image Computing and Computer Assisted Intervention, pages 318-329. Springer, 2021. 7
|
| 391 |
+
[22] Xiangde Luo, Minhao Hu, Tao Song, Guotai Wang, and Shaoting Zhang. Semi-supervised medical image segmentation via cross teaching between cnn and transformer. In International Conference on Medical Imaging with Deep Learning, pages 820-833. PMLR, 2022. 3, 7
|
| 392 |
+
[23] Fei Lyu, Mang Ye, Jonathan Frederik Carlsen, Kenny Erleben, Sune Darkner, and Pong C Yuen. Pseudo-label guided image synthesis for semi-supervised Covid-19 pneumonia infection segmentation. IEEE Transactions on Medical Imaging, 42(3):797-809, 2022. 2
|
| 393 |
+
[24] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993, 2018. 1
|
| 394 |
+
[25] Yassine Ouali, Céline Hudelot, and Myriam Tami. Semisupervised semantic segmentation with cross-consistency training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12674-12684, 2020. 2, 7
|
| 395 |
+
[26] Qianyao Qiang, Bin Zhang, Feiping Nie, and Fei Wang. Multi-view semi-supervised learning with adaptive graph fusion. Neurocomputing, 557:126685, 2023. 3
|
| 396 |
+
[27] Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, and Alan Yuille. Deep co-training for semi-supervised image recognition. In Proceedings of the European conference on computer vision (eccv), pages 135-152, 2018. 3
|
| 397 |
+
[28] Ilija Radosavovic, Piotr Dollar, Ross Girshick, Georgia Gkioxari, and Kaiming He. Data distillation: Towards omnisupervised learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4119-4128, 2018. 2
|
| 398 |
+
[29] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015. 7
|
| 399 |
+
[30] Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. Semi-supervised self-training of object detection models. In IEEE Workshop on Applications of Computer Vision, 2005. 1
|
| 400 |
+
[31] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Advances in neural information processing systems, 29, 2016. 1
|
| 401 |
+
[32] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in neural information processing systems, 33:596-608, 2020. 2, 3, 7
|
| 402 |
+
[33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in neural information processing systems, 30, 2017. 2
|
| 403 |
+
|
| 404 |
+
[34] Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. Neural Networks, 145:90-106, 2022. 2, 7
|
| 405 |
+
[35] Fuping Wu and Xiahai Zhuang. Minimizing estimated risks on unlabeled data: A new formulation for semi-supervised medical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):6021-6036, 2022. 6
|
| 406 |
+
[36] Yicheng Wu, Minfeng Xu, Zongyuan Ge, Jianfei Cai, and Lei Zhang. Semi-supervised left atrium segmentation with mutual consistency training. In Medical Image Computing and Computer Assisted Intervention, pages 297-306. Springer, 2021. 7
|
| 407 |
+
[37] Yicheng Wu, Zongyuan Ge, Donghao Zhang, Minfeng Xu, Lei Zhang, Yong Xia, and Jianfei Cai. Mutual consistency learning for semi-supervised medical image segmentation. Medical Image Analysis, 81:102530, 2022. 7
|
| 408 |
+
[38] Yicheng Wu, Zhonghua Wu, Qianyi Wu, Zongyuan Ge, and Jianfei Cai. Exploring smoothness and class-separation for semi-supervised medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 34-43. Springer, 2022. 7
|
| 409 |
+
[39] Yingda Xia, Dong Yang, Zhiding Yu, Fengze Liu, Jinzheng Cai, Lequan Yu, Zhuotun Zhu, Daguang Xu, Alan Yuille, and Holger Roth. Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation. Medical image analysis, 65:101766, 2020. 3
|
| 410 |
+
[40] Yang Xiu, Xinyi Zheng, Linlin Sun, and Zhuohao Fang. Fremix: Frequency-based mixup for data augmentation. Wireless Communications and Mobile Computing, 2022(1): 5323327, 2022. 3
|
| 411 |
+
[41] Qinwei Xu, Ruipeng Zhang, Ya Zhang, Yanfeng Wang, and Qi Tian. A fourier-based framework for domain generalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14383-14392, 2021. 3
|
| 412 |
+
[42] Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, and Yang Gao. St++: Make self-training work better for semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4268-4277, 2022. 3
|
| 413 |
+
[43] Yanchao Yang and Stefano Soatto. Fda: Fourier domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4085-4095, 2020. 3
|
| 414 |
+
[44] Boon Peng Yap and Beng Koon Ng. Cut-paste consistency learning for semi-supervised lesion segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 6160-6169, 2023. 3
|
| 415 |
+
[45] Lequan Yu, Shujun Wang, Xiaomeng Li, Chi-Wing Fu, and Pheng-Ann Heng. Uncertainty-aware self-ensembling model for semi-supervised 3d left atrium segmentation. In Medical image computing and computer assisted intervention-MICCAI 2019: 22nd international conference, Shenzhen, China, October 13-17, 2019, proceedings, part II 22, pages 605-613. Springer, 2019. 2, 7
|
| 418 |
+
[46] Hongyi Zhang. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. 5
|
| 419 |
+
[47] Zhenxi Zhang, Ran Ran, Chunna Tian, Heng Zhou, Xin Li, Fan Yang, and Zhicheng Jiao. Self-aware and cross-sample prototypical learning for semi-supervised medical image segmentation. arXiv preprint arXiv:2305.16214, 2023. 7
|
| 420 |
+
[48] Zhen Zhao, Zicheng Wang, Longyue Wang, Dian Yu, Yixuan Yuan, and Luping Zhou. Alternate diverse teaching for semi-supervised medical image segmentation. In European Conference on Computer Vision, pages 227-243. Springer, 2025. 3, 7
|
| 421 |
+
[49] Tianyi Zhou, Shengjie Wang, and Jeff Bilmes. Time-consistent self-supervision for semi-supervised learning. In International conference on machine learning, pages 11523-11533. PMLR, 2020. 2
|
| 422 |
+
[50] Ye Zhu, Jie Yang, Si-Qi Liu, and Ruimao Zhang. Inherent consistent learning for accurate semi-supervised medical image segmentation. arXiv preprint arXiv:2303.14175, 2023. 7
|
| 423 |
+
[51] Xiahai Zhuang. Multivariate mixture model for myocardial segmentation combining multi-source images. IEEE transactions on pattern analysis and machine intelligence, 41(12): 2933-2946, 2018. 6
|
CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e91407ea1477d3ceefc1f0599f121bba7a6f8edf56d136c2ee3531c932a9cd19
|
| 3 |
+
size 688551
|
CVPR/2025/beta-FFT_ Nonlinear Interpolation and Differentiated Training Strategies for Semi-Supervised Medical Image Segmentation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a5136e3c7e5cb5bd44fbe0482437bef79c1d562579230bf7f4ae285c3c93f50f
|
| 3 |
+
size 465095
|
CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/19b2c42f-ce51-44ed-b899-64702962cff1_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:54760c5fe98f26aa1a0266ab8cb11afae4c74529cc0c2cba9266c3c7b2abc2cb
|
| 3 |
+
size 93804
|
CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/19b2c42f-ce51-44ed-b899-64702962cff1_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a13f0a01f9a4d68d4ba7b7290b6d7dcba8c05cfa6574df4646dcb81d0fb27adc
|
| 3 |
+
size 118562
|
CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/19b2c42f-ce51-44ed-b899-64702962cff1_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fbc7cd921812c35a492dc73c24091282f4d623278fda82cbc8671e490ec24560
|
| 3 |
+
size 2679975
|
CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/full.md
ADDED
|
@@ -0,0 +1,328 @@
| 1 |
+
# dFLMoE: Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis
|
| 2 |
+
|
| 3 |
+
Luyuan Xie $^{1,2,3*}$
|
| 4 |
+
Nan Xi $^{4}$
|
| 5 |
+
|
| 6 |
+
Tianyu Luan $^{1,3,4\dagger}$ Yuejian Fang $^{1,2,3}$
|
| 7 |
+
|
| 8 |
+
Wenyuan Cai $^{1}$
|
| 9 |
+
Qingni Shen $^{1,2,3}$
|
| 10 |
+
|
| 11 |
+
Guochen Yan $^{2,3}$ Zhonghai Wu $^{1,2,3}$
|
| 12 |
+
|
| 13 |
+
Zhaoyu Chen $^{1,2,3}$
|
| 14 |
+
Junsong Yuan $^{4}$
|
| 15 |
+
|
| 16 |
+
$^{1}$ School of Software and Microelectronics, Peking University $^{2}$ PKU-OCTA Laboratory for Blockchain and Privacy Computing $^{3}$ National Engineering Research Center for Software Engineering, Peking University $^{4}$ State University of New York at Buffalo
|
| 17 |
+
|
| 18 |
+
# Abstract
|
| 19 |
+
|
| 20 |
+
Federated learning has wide applications in the medical field. It enables knowledge sharing among different healthcare institutes while protecting patients' privacy. However, existing federated learning systems are typically centralized, requiring clients to upload client-specific knowledge to a central server for aggregation. This centralized approach would integrate the knowledge from each client into a centralized server, and the knowledge would be already undermined during the centralized integration before it reaches back to each client. Besides, the centralized approach also creates a dependency on the central server, which may affect training stability if the server malfunctions or connections are unstable. To address these issues, we propose a decentralized federated learning framework named dFLMoE. In our framework, clients directly exchange lightweight head models with each other. After exchanging, each client treats both local and received head models as individual experts, and utilizes a client-specific Mixture of Experts (MoE) approach to make collective decisions. This design not only reduces the knowledge damage with client-specific aggregations but also removes the dependency on the central server to enhance the robustness of the framework. We validate our framework on multiple medical tasks, demonstrating that our method evidently outperforms state-of-the-art approaches under both model homogeneity and heterogeneity settings.
|
| 21 |
+
|
| 22 |
+
# 1. Introduction
|
| 23 |
+
|
| 24 |
+
Federated learning has extensive medical applications. A well-designed federated system can protect data privacy while sharing high-level knowledge among different clients. This enables each client's network to receive additional
|
| 25 |
+
|
| 26 |
+
(a) Centralized Federated Learning (Previous)
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
(b) Decentralized Federated Learning (Ours)
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
Figure 1. (a) Previous centralized federated learning framework aggregates knowledge from each client in a centralized server. This process can lead to knowledge damage in centralized aggregation and the framework is heavily dependent on the central server's stability. (b) Our decentralized framework dFLMoE eliminates centralized server and aggregation by having clients directly exchange knowledge with each other. Each client then uses a Mixture of Experts (MoE) approach to adaptively combine local and received knowledge.
|
| 33 |
+
|
| 34 |
+
support and achieve better performance and generalizability. In medical scenarios, patient data is hard to collect and has strong privacy protection requirements. Federated learning systems can effectively address the data limitations at each healthcare institution, enhancing their model performance and generalizability while ensuring privacy.
|
| 35 |
+
|
| 36 |
+
Existing federated learning systems, such as [8, 9, 27, 36], are designed in a centralized manner. In each training round, each client needs to upload client-specific knowledge (e.g. model parameters) to a central server for aggregation, which is then distributed back to each client. Regarding aggregation methods, they require a unified model structure [8, 9, 36], a centralized messenger model [52, 53, 56], or a unified public dataset [20, 23, 28, 58]. Such centralized designs achieve
good results, but this design may lead to performance bottlenecks. As shown in Figure 1(a), centralized federated learning frameworks, such as [36, 58], would distill knowledge from each client from their local data, and then send that knowledge to a centralized server which aggregates that knowledge into a single model. However, the aggregation process would typically merge the information from all client models into a single aggregated model, mostly with a sample merging scheme such as weighted sum. Considering the domain and data distribution differences among clients, the same aggregation process for all clients would result in potential knowledge damage even before the aggregated knowledge gets back to each client. Moreover, such centralized aggregation methods, particularly weighted sum schemes, are also widely used in federated systems like [21, 24, 36], which may not preserve the knowledge of each client well and could possibly hurt the performance of the federated learning framework. Furthermore, centralized federated frameworks heavily depend on the central server and the stability of its connections. If the central server malfunctions or the connections to it are unstable, the training stability of each client can be significantly affected.
|
| 39 |
+
|
| 40 |
+
To address the knowledge damage of centralized aggregation and to reduce the dependence on the centralized server, we propose a decentralized approach to design a federated learning framework. As illustrated in Figure 1(b), to minimize knowledge damage during model aggregation, we eliminate the centralized model aggregation operation. Instead, during the knowledge exchange process, the knowledge that each client would originally send to the server is now directly transmitted to other clients. This way, each client can receive the full knowledge sent by others without damage. Note that the communications between clients do not involve any patient data, which allows us to effectively protect patient privacy. Then, within each client, we design a Mixture of Experts (MoE) approach, treating the knowledge received from other clients and the client's own local knowledge as individual experts, and making decisions collectively using these experts. This decentralized design enables each client to consider its own local data and adaptively select the participation and weights of the experts and also avoids the unnecessary knowledge damage that occurs in centralized systems when aggregating into a unified model. Furthermore, it eliminates the reliance on a central server. If a client or some connections are unstable, the training of our framework would still be effective and without interruption.
|
| 41 |
+
|
| 42 |
+
Our decentralized system is named Decentralized Federated Learning via Mixture of Experts (dFLMoE). In each training round, we first train the local model of each client, which consists of a body and a head. Each client's body processes the input and encodes it into features, which are then passed through the head to obtain the final results. After local training, we send the model heads from each client to
all other clients. Considering that a decentralized framework requires model transmission between every pair of clients, transmitting only the lightweight head models would significantly reduce communication costs. After obtaining the heads from other clients, we train an attention-based MoE model, adaptively selecting the most effective combination of heads on each client to obtain the final results. Such client-specific MoE design does not require a structure consistency of the head from each client, which can effectively accommodate the commonly occurring model heterogeneity in practical medical scenarios. Moreover, due to the decentralized nature of the system, when a certain client encounters issues, other clients can still be trained without interruption. If the connection between two clients drops, the knowledge from these clients can still be shared through others, enhancing the robustness of the framework.
|
| 45 |
+
|
| 46 |
+
In summary, our contributions are as follows:
|
| 47 |
+
|
| 48 |
+
- We propose a decentralized federated learning framework named dFLMoE. Our framework directly transmits each client's knowledge to other clients and performs local decision-making on each client, effectively avoiding the knowledge damage caused by centralized server aggregation and eliminating the dependence on a central server.
|
| 49 |
+
- We design a lightweight Mixture of Experts (MoE) module for each client. This local MoE module can adaptively make client-specific decisions using lightweight experts from local and other clients, which can better adapt knowledge from other clients to improve performance and generalizability, without notably increasing communication costs.
|
| 50 |
+
- We validate the effectiveness of our framework on 5 different medical tasks. Our experimental results demonstrate that, on these tasks, under both model homogeneity and heterogeneity settings, our method evidently outperforms the state-of-the-art.
|
| 51 |
+
|
| 52 |
+
# 2. Related Works
|
| 53 |
+
|
| 54 |
+
Centralized federated learning. The general paradigm of federated learning involves clients uploading their local knowledge to a central server for aggregation, which is then distributed back to all clients. Based on the type of aggregation methods, this can be divided into three main categories: local model parameters aggregation [9, 21, 24, 25, 33, 36, 43], soft predictions aggregation [20, 23, 28, 58], and messenger model parameters aggregation [52, 53, 55, 56]. The framework for local model parameters aggregation requires aggregating all or part of the local model parameters at the central server [8, 14, 18, 25, 27, 35, 37, 48]. They require consistent local model structure [2, 7, 34, 51]. Federated learning frameworks that aggregate soft predictions require a public dataset, limiting their application in medical scenarios. Frameworks based on aggregating messenger model parameters insert a homogeneous model into each client and
share this model to transfer knowledge. These centralized approaches can lead to knowledge damage during aggregation, and the knowledge would be undermined before it reaches back to each client. Meanwhile, if the central server malfunctions or the connections are unstable, the training stability of each client can be significantly impacted.
|
| 57 |
+
|
| 58 |
+
Decentralized federated learning. Decentralized federated learning, also known as peer-to-peer federated learning [40], addresses the dependency on a central server. Currently, mainstream research in decentralized federated learning focuses on integrating it with blockchain to further enhance security and privacy [1, 30, 38, 59]. However, these works did not address the statistical heterogeneity and system heterogeneity issues in federated learning. Meanwhile, recent work [6, 26, 41, 47] has emerged to improve the performance of decentralized federated learning. They have only decentralized the security aspect without decentralizing the algorithm. Our method adopts a localized knowledge fusion approach, allowing us to adaptively select knowledge based on each client's needs, thereby reducing knowledge damage.
|
| 59 |
+
|
| 60 |
+
# 3. Method
|
| 61 |
+
|
| 62 |
+
# 3.1. Overview
|
| 63 |
+
|
| 64 |
+
We design a decentralized federated learning framework named dFLMoE to address the knowledge damage of centralized aggregation and reduce the dependence on the centralized server. Specifically, we firstly train a local network for client $i$ by its private dataset $D_{i} = \{x_{i},y_{i}\}$ , where $x_{i}$ is the input data in $D_{i}$ , and $y_{i}$ is the label. Then each client shares their learned knowledge $K$ with other clients. Finally, we achieve the final decision through knowledge fusion using Mixture of Experts (MoE). The dFLMoE's paradigm can be expressed as:
|
| 65 |
+
|
| 66 |
+
$$
|
| 67 |
+
\mathbb {G} = \bigcup_ {i = 0} ^ {N} f _ {i} \left(\theta_ {i}; x _ {i}; \left\{K _ {1}, \dots K _ {i}, \dots K _ {N} \right\}\right), \tag {1}
|
| 68 |
+
$$
|
| 69 |
+
|
| 70 |
+
where $f_{i}(\theta_{i};x_{i};\{K_{1},\dots K_{i},\dots K_{N}\})$ is the model for client $i$, $\theta_{i}$ are the parameters of $f_{i}$, $x_{i}$ is the model input, $K_{i}$ is the knowledge shared by client $i$, $N$ is the total number of participating clients, and $\mathbb{G}$ denotes the set of all $f_{i}$.
|
| 71 |
+
|
| 72 |
+
The pipeline of dFLMoE is shown in Figure 2. In each client, the model includes the local network and invited experts. The local network is divided into four parts: Body, Feature space transform, Head, and Mixture of Experts (MoE). The body model is used to extract features. Feature space transform module converts the local features into the feature space corresponding to the respective experts (heads). The head model generates the network output using the features and the head module of each client is also shared among all clients, with the heads invited from other clients forming the
Mixture of Experts module for each client. We treat each head as an expert and use a Mixture of Experts (MoE) approach to get the final outputs. Our training process consists of 3 steps: a) Local network training, b) Sharing the local head among clients, and c) Mixture of Experts decision. In the rest of the section, we will explain each step in detail.
|
| 75 |
+
|
| 76 |
+

|
| 77 |
+
Figure 2. Overview of our proposed dFLMoE framework. For each training phase, we first train the Local network (Body and Head) while freezing the parameters of the MoE module (top right). Then, we send and receive the head to share knowledge among clients (bottom). Finally, we do a Mixture-of-Experts (MoE) decision by training the Feature space transform and MoE network while freezing other parameters including the local body and all the heads. More details can be found in the Sec. 3.
|
| 78 |
+
|
| 79 |
+
# 3.2. Local Network Training
|
| 80 |
+
|
| 81 |
+
At this stage, our goal is to obtain the local network with local knowledge by local data. Therefore, we only train the head and body of the local network and freeze the parameters of the feature space transform and MoE module. For the client $i$ , the local network output $\hat{y}_i^l$ can be defined as:
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
\hat{y}_{i}^{l} = F_{h, i}\left(F_{b, i}\left(x_{i}\right)\right), \tag{2}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
where $F_{b,i}(\cdot)$ and $F_{h,i}(\cdot)$ are the body and head of the local network in client $i$ , respectively. The MoE output $\hat{y}_i^m$ can be represented as:
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\hat{y}_{i}^{m} = M\left(\bigcup_{j = 1}^{N} F_{h, j}\left(FT_{j}\left(F_{b, i}(x_{i})\right)\right),\; F_{b, i}(x_{i})\right), \tag{3}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
where $M(\cdot)$ is the MoE network and $FT_{j}$ is the feature space transform of Experts $j$ (See Sec. 3.4 for details) in Fig.2, $F_{b,i}(\cdot)$ is the local model body, $\bigcup_{j=1}^{N} F_{h,j}(FT_{j}(F_{b,i}(x_{i})))$ is the set of predictions from each expert (head). $N$ is the total number of participating clients, excluding the local client.
|
| 94 |
+
|
| 95 |
+
Finally, for client $i$ , its training loss function $\mathcal{L}_{ln,i}$ is:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\mathcal{L}_{ln, i} = \lambda_{loc}\, \mathcal{L}_{loc}\left(\hat{y}_{i}^{l}, y_{i}\right) + \lambda_{MoE}\, \mathcal{L}_{MoE}\left(\hat{y}_{i}^{m}, y_{i}\right). \tag{4}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
$\mathcal{L}_{loc}$ and $\mathcal{L}_{MoE}$ represent the loss functions of the local network and the MoE, respectively. For classification tasks, $\mathcal{L}_{loc}$ and $\mathcal{L}_{MoE}$ are cross-entropy loss. For super-resolution tasks, $\mathcal{L}_{loc}$ and $\mathcal{L}_{MoE}$ are $L1$ loss. And for segmentation tasks, $\mathcal{L}_{loc}$ and $\mathcal{L}_{MoE}$ are Dice and cross-entropy loss. $\lambda_{loc}$ and $\lambda_{MoE}$ are their corresponding weights. $y_{i}$ is the label of local data $x_{i}$ . More details can be found in supplementary materials.
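A minimal sketch of one local-network update implementing Eq. (4) for the classification case; the module names (`body`, `head`, `moe`) and the use of plain cross-entropy are illustrative assumptions, and the MoE parameters are kept frozen as described in Sec. 3.2.

```python
import torch.nn.functional as F

def local_step(body, head, moe, x, y, lam_loc=1.0, lam_moe=1.0):
    """One local training step: only the body and head receive gradients."""
    for p in moe.parameters():
        p.requires_grad_(False)
    feat = body(x)
    y_local = head(feat)            # Eq. (2)
    y_moe = moe(feat)               # Eq. (3); experts are applied inside `moe`
    loss = lam_loc * F.cross_entropy(y_local, y) + \
           lam_moe * F.cross_entropy(y_moe, y)      # Eq. (4), classification case
    loss.backward()
    return loss
```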
|
| 102 |
+
|
| 103 |
+
# 3.3. Localized Knowledge Exchange
|
| 104 |
+
|
| 105 |
+
In the communication stage of existing decentralized federated learning, each of the $N$ participating clients needs to share its local model with the other $N - 1$ clients. Thus, in total there are $N(N - 1)$ times of communication, which is significantly higher than the centralized federated learning, which would only need $2N$ communications for both uploading and downloading. To reduce the communication cost of decentralized federated learning, in the Sharing local head among clients phase, we only share the head of the local model instead of the entire local model. The parameters of the head are several orders of magnitude smaller than those of the local model, which significantly reduces computational costs. Compared to centralized federated learning, this approach does not introduce a significant communication burden. Our experiments demonstrate that, in contrast to sharing the entire local model with each client, our communication overhead is only $0.02\%$ of theirs, while our performance remains comparable to theirs.
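As a quick check of the communication counts above (the number of clients is an arbitrary example):

```python
N = 10                       # example number of clients
decentralized = N * (N - 1)  # every client sends its head to every other client
centralized = 2 * N          # upload plus download with a central server
print(decentralized, centralized)  # 90 vs. 20 transmissions per round
```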
|
| 106 |
+
|
| 107 |
+
# 3.4. Mixture of Experts Decision
|
| 108 |
+
|
| 109 |
+
This stage is designed to learn the combination weights of all experts based on the local data. During this stage of training, we fine-tune the parameters of feature space transform and MoE while freezing other parameters. The Mixture of Experts Decision loss function $\mathcal{L}_{MD,i}$ for client $i$ is defined as:
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\mathcal {L} _ {M D, i} = \mathcal {L} _ {M o E} \left(\hat {y} _ {i} ^ {m}, y _ {i}\right), \tag {5}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
where $\mathcal{L}_{MoE}$ is the loss functions of the MoE. For classification tasks, $\mathcal{L}_{MoE}$ is cross-entropy loss. For super-resolution tasks, $\mathcal{L}_{MoE}$ is $L1$ loss. And for segmentation tasks, $\mathcal{L}_{MoE}$ is Dice and cross-entropy loss. During inference, we directly use the output of the MoE as the final prediction. The experimental results show that dFLMoE can be applied to federated learning scenarios with data heterogeneity, model homogeneity, and model heterogeneity without notably increasing communication costs.
|
| 116 |
+
|
| 117 |
+
Feature space transform in MoE. Before the final Mixture of Experts decision, we design a feature space transform module to transform the local features into the corresponding expert's feature space, as shown in Figure 3.
|
| 118 |
+
|
| 119 |
+

|
| 120 |
+
Figure 3. The structure of Mixture of Experts and Feature Space Transform. Firstly, the Feature Space Transform converts the local body feature into the feature space corresponding to each expert. Then, each feature obtains the final prediction through the respective expert, and we collect all predictions as the Key $K$ and Value $V$ . Next, we generate the query $Q$ using the local body feature through a linear layer. Finally, we perform the attention mechanism with $Q$ , $K$ , and $V$ to obtain the final predictions.
|
| 121 |
+
|
| 122 |
+
body feature is first transformed into a common space by $W_{com}$ , and then separately transformed into the corresponding expert's feature space through the respective $W_{j}$ . In classification tasks, $W_{com}$ and $W_{j}$ are linear layers. For the segmentation and super-resolution tasks, $W_{com}$ and $W_{j}$ are convolutional layers.
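
The description above can be read as a two-stage projection. A possible PyTorch rendering for the classification case is sketched below; the layer sizes are arbitrary and the module layout is our reading of Figure 3, not the released implementation (for segmentation and super-resolution the linear layers would be replaced by convolutions).

```python
import torch
import torch.nn as nn

class FeatureSpaceTransform(nn.Module):
    """Map a local body feature into each expert's feature space:
    first a shared projection W_com, then one projection W_j per expert."""
    def __init__(self, feat_dim, num_experts):
        super().__init__()
        self.w_com = nn.Linear(feat_dim, feat_dim)                     # common-space transform
        self.w_j = nn.ModuleList(                                      # per-expert transforms
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_experts)]
        )

    def forward(self, body_feat):                  # body_feat: (batch, feat_dim)
        common = self.w_com(body_feat)
        return [w(common) for w in self.w_j]       # one feature per expert

# Toy usage: a batch of 8 features, 4 participating clients/experts.
fst = FeatureSpaceTransform(feat_dim=128, num_experts=4)
per_expert_feats = fst(torch.randn(8, 128))
```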
|
| 123 |
+
|
| 124 |
+
After feature space transform, the features in the corresponding space generate the predictions through the respective experts. We utilize the Mixture of Experts (MoE) framework to effectively aggregate these predictions. To enhance the MoE's focus on key experts, inspired by [5, 29], we incorporate a cross-attention mechanism to learn the weights associated with the predictions generated by each client's experts. It is important to note that local models not only capture essential information from their respective local datasets but also tend to inherit biases that may contribute to overfitting. By utilizing the experts from all clients as candidates and employing local features as queries to extract relevant information from the collective pool of experts, we ensure that the selected information reflects the common knowledge shared across clients. We posit that this public information, derived from diverse datasets, possesses greater generalizability, while the local biases are effectively mitigated in the selection process. Consequently, we propose the adoption of a cross-attention design to filter out local biases and enhance the overall generalization capability of the model.
|
| 125 |
+
|
| 126 |
+
The MoE is illustrated in Figure 3. We denote the local body feature as $I$ , and concatenate all the expert predictions as $K$ and $V$ . The feature $I$ is mapped to the query feature $Q$ through a linear layer $W$ . The prediction of the MoE, $y_{MoE}$ , is represented as:
|
| 127 |
+
|
| 128 |
+
$$
y_{MoE} = \mathrm{Attention}(W(I), K, V), \tag{6}
$$
|
| 131 |
+
|
| 132 |
+
Table 1. The results of the classification task at different resolutions with homogeneous or heterogeneous models. The $\mathrm{x}2\downarrow$ , $\mathrm{x}4\downarrow$ , and $\mathrm{x}8\downarrow$ denote downsampling the high-resolution images to one-half, one-quarter, and one-eighth resolution. We evaluate ACC and MF1 results on the BreaKHis dataset. The larger the better. Bold number means the best. The red boxes represent the single-model federated learning and personalized federated learning methods, whose individual clients use the homogeneous model setting (ResNet5). The blue boxes represent the methods using heterogeneous models, where the four client models are set to ResNet{17, 11, 8, 5}, respectively. In both model settings, dFLMoE achieves the best performance.
|
| 133 |
+
|
| 134 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="2">HR</td><td colspan="2">x2</td><td colspan="2">x4</td><td colspan="2">x8</td><td colspan="2">Average</td></tr><tr><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td></tr><tr><td>Only Local Training</td><td>0.7491</td><td>0.6719</td><td>0.7568</td><td>0.6856</td><td>0.7015</td><td>0.6135</td><td>0.6956</td><td>0.5867</td><td>0.7258</td><td>0.6394</td></tr><tr><td>FedAvg</td><td>0.6067</td><td>0.4621</td><td>0.6667</td><td>0.5874</td><td>0.6178</td><td>0.5194</td><td>0.5799</td><td>0.4616</td><td>0.6178</td><td>0.5076</td></tr><tr><td>SCAFFOLD</td><td>0.6263</td><td>0.4821</td><td>0.7156</td><td>0.6597</td><td>0.6475</td><td>0.5906</td><td>0.5702</td><td>0.4969</td><td>0.6399</td><td>0.5573</td></tr><tr><td>FedProx</td><td>0.6195</td><td>0.4958</td><td>0.6862</td><td>0.6271</td><td>0.6467</td><td>0.5632</td><td>0.4664</td><td>0.3495</td><td>0.6047</td><td>0.5089</td></tr><tr><td>Ditto</td><td>0.7111</td><td>0.6557</td><td>0.7321</td><td>0.6404</td><td>0.7261</td><td>0.6743</td><td>0.6854</td><td>0.5932</td><td>0.7137</td><td>0.6409</td></tr><tr><td>APFL</td><td>0.6412</td><td>0.5848</td><td>0.6033</td><td>0.5626</td><td>0.7301</td><td>0.6468</td><td>0.6973</td><td>0.6166</td><td>0.6680</td><td>0.6027</td></tr><tr><td>FedRep</td><td>0.7663</td><td>0.7165</td><td>0.7513</td><td>0.6869</td><td>0.6849</td><td>0.6151</td><td>0.7254</td><td>0.6229</td><td>0.7320</td><td>0.6604</td></tr><tr><td>LG-FedAvg</td><td>0.7358</td><td>0.6504</td><td>0.7733</td><td>0.6726</td><td>0.7182</td><td>0.6323</td><td>0.7173</td><td>0.6481</td><td>0.7362</td><td>0.6509</td></tr><tr><td>MH-pFLID</td><td>0.8282</td><td>0.7762</td><td>0.8308</td><td>0.7829</td><td>0.8180</td><td>0.7674</td><td>0.7560</td><td>0.6933</td><td>0.8083</td><td>0.7550</td></tr><tr><td>dFLMoE (Ours)</td><td>0.8652</td><td>0.8360</td><td>0.8597</td><td>0.8322</td><td>0.8423</td><td>0.8063</td><td>0.7602</td><td>0.7131</td><td>0.8319</td><td>0.7969</td></tr><tr><td>Only Local Training</td><td>0.7891</td><td>0.7319</td><td>0.8027</td><td>0.7461</td><td>0.7538</td><td>0.6852</td><td>0.6956</td><td>0.5867</td><td>0.7603</td><td>0.6875</td></tr><tr><td>FedMD</td><td>0.7599</td><td>0.7083</td><td>0.8321</td><td>0.7829</td><td>0.7721</td><td>0.7293</td><td>0.6495</td><td>0.5439</td><td>0.7534</td><td>0.6911</td></tr><tr><td>FedDF</td><td>0.7661</td><td>0.7253</td><td>0.8132</td><td>0.7629</td><td>0.7826</td><td>0.7342</td><td>0.6627</td><td>0.5627</td><td>0.7562</td><td>0.6963</td></tr><tr><td>pFedDF</td><td>0.8233</td><td>0.7941</td><td>0.8369</td><td>0.7965</td><td>0.8121</td><td>0.7534</td><td>0.6843</td><td>0.6022</td><td>0.7892</td><td>0.7366</td></tr><tr><td>DS-pFL</td><td>0.7842</td><td>0.7609</td><td>0.8334</td><td>0.7967</td><td>0.7782</td><td>0.7258</td><td>0.6327</td><td>0.5229</td><td>0.7571</td><td>0.7016</td></tr><tr><td>KT-pFL</td><td>0.8424</td><td>0.8133</td><td>0.8441</td><td>0.8011</td><td>0.7801</td><td>0.7325</td><td>0.7032</td><td>0.6219</td><td>0.7925</td><td>0.7422</td></tr><tr><td>MH-pFLID</td><td>0.8929</td><td>0.8658</td><td>0.8992</td><td>0.8787</td><td>0.8661</td><td>0.8327</td><td>0.7751</td><td>0.7130</td><td>0.8583</td><td>0.8226</td></tr><tr><td>dFLMoE (Ours)</td><td>0.9048</td><td>0.8898</td><td>0.9205</td><td>0.9064</td><td>0.9039</td><td>0.8865</td><td>0.8227</td><td>0.7819</td><td>0.8880</td><td>0.8662</td></tr></table>
|
| 135 |
+
|
| 136 |
+

|
| 137 |
+
x8
|
| 138 |
+
|
| 139 |
+

|
| 140 |
+
|
| 141 |
+

|
| 142 |
+
HR
|
| 143 |
+
Bicubic
|
| 144 |
+
FedAvg
|
| 145 |
+
SCAFFOLD
|
| 146 |
+
FedProx
|
| 147 |
+
LG-FedAvg
|
| 148 |
+
FedRep
|
| 149 |
+
Ours
|
| 150 |
+
Ours
|
| 151 |
+
|
| 152 |
+

|
| 153 |
+
x4
|
| 154 |
+
|
| 155 |
+

|
| 156 |
+
(RCNN)
|
| 157 |
+
(SRResNet)
|
| 158 |
+
|
| 159 |
+

|
| 160 |
+
HR
|
| 161 |
+
Bicubic
|
| 162 |
+
FedAvg
|
| 163 |
+
SCAFFOLD
|
| 164 |
+
FedProx
|
| 165 |
+
LG-FedAvg
|
| 166 |
+
FedRep
|
| 167 |
+
Ours
|
| 168 |
+
(RCNN)
|
| 169 |
+
Ours
|
| 170 |
+
(SRResNet)
|
| 171 |
+
Figure 4. Visualized comparison of Federated Learning in medical image super-resolution. We randomly select two samples from different resolutions (x8↓ and x4↓) to form the visualization. Super-resolution results for FedAVG, SCAFFOLD, FedProx, LG-FedAvg, FedRep, our method dFLMoE (RCNN) and dFLMoE (SRResNet). Our framework can recover more details.
|
| 172 |
+
|
| 173 |
+
where Attention is the attention mechanism [46]. More details can be found in the supplementary materials.
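
For concreteness, the cross-attention combination of Eq. (6) could look like the following sketch, where the expert predictions are stacked as keys and values and the local body feature supplies the query. This is a simplified single-head version with assumed dimensions, not the authors' exact module.

```python
import torch
import torch.nn as nn

class MoEAttentionDecision(nn.Module):
    """Fuse per-expert predictions with single-head cross-attention:
    Q comes from the local body feature, K and V are the expert predictions."""
    def __init__(self, feat_dim, pred_dim):
        super().__init__()
        self.w_q = nn.Linear(feat_dim, pred_dim)   # linear layer W producing the query

    def forward(self, body_feat, expert_preds):
        # body_feat: (batch, feat_dim); expert_preds: (batch, num_experts, pred_dim)
        q = self.w_q(body_feat).unsqueeze(1)                         # (batch, 1, pred_dim)
        scores = q @ expert_preds.transpose(1, 2)                    # (batch, 1, num_experts)
        weights = torch.softmax(scores / expert_preds.size(-1) ** 0.5, dim=-1)
        return (weights @ expert_preds).squeeze(1)                   # (batch, pred_dim)

# Toy usage: 4 experts producing 2-class logits from a 128-d body feature.
moe = MoEAttentionDecision(feat_dim=128, pred_dim=2)
y_moe = moe(torch.randn(8, 128), torch.randn(8, 4, 2))
```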
|
| 174 |
+
|
| 175 |
+
# 4. Experiments
|
| 176 |
+
|
| 177 |
+
# 4.1. Tasks and Datasets
|
| 178 |
+
|
| 179 |
+
We verify the effectiveness of dFLMoE on 5 non-IID tasks.
|
| 180 |
+
|
| 181 |
+
A. Medical image classification (different resolutions). We use the Breast Cancer Histopathological Image Database (BreaKHis) [45]. We perform $\mathrm{x2\downarrow}$ , $\mathrm{x4\downarrow}$ , and $\mathrm{x8\downarrow}$ downsampling on the high-resolution images [49]. Each resolution of medical images is treated as a client, resulting in four clients in total. Following previous work, the dataset of each client was randomly divided into training and testing sets at a ratio of 7:3. The different-resolution versions of the same image are assigned together to either the training set or the testing set. For the model homogeneous framework, we employed ResNet{5}. For the model heterogeneous framework, we employed ResNet{17, 11, 8, 5}.
|
| 184 |
+
|
| 185 |
+
B. Medical image super-resolution. We use BreaKHis
|
| 186 |
+
|
| 187 |
+

|
| 188 |
+
Figure 5. Visualized comparison of Federated Learning in medical image segmentation. We randomly select three samples from different clients to form the visualization. (a-k) Segmentation results for FedAvg, SCAFFOLD, FedProx, Ditto, APFL, LG-FedAvg, FedRep, FedSM, LC-Fed, MH-pFLID and our method dFLMoE; (l) Ground truths (denoted as 'GT').
|
| 189 |
+
|
| 190 |
+
Table 2. The results of super-resolution with homogeneous or heterogeneous models. The $\mathrm{x8\uparrow}$ , $\mathrm{x4\uparrow}$ , and $\mathrm{x2\uparrow}$ denote eight-times, four-times, and two-times super-resolution of images downsampled to one-eighth, one-quarter, and one-half of the high resolution. We evaluate PSNR and SSIM results on the BreaKHis dataset. The larger the better. The red boxes represent the methods whose individual clients adopt the homogeneous model setting (RCNN). The blue boxes represent the methods using heterogeneous models, where the three client models are set to SRResNet{18, 12, 6}, respectively. In both model settings, dFLMoE achieves the best performance.
|
| 191 |
+
|
| 192 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">x8↑</td><td colspan="2">x4↑</td><td colspan="2">x2↑</td><td colspan="2">Average</td></tr><tr><td>PSNR↑</td><td>SSIM↑</td><td>PSNR↑</td><td>SSIM↑</td><td>PSNR↑</td><td>SSIM↑</td><td>PSNR↑</td><td>SSIM↑</td></tr><tr><td>Bicubic</td><td>20.75</td><td>0.4394</td><td>23.21</td><td>0.6305</td><td>26.60</td><td>0.9151</td><td>23.52</td><td>0.6617</td></tr><tr><td>Only Local Training</td><td>21.12</td><td>0.4872</td><td>24.04</td><td>0.6634</td><td>28.36</td><td>0.8631</td><td>24.51</td><td>0.6712</td></tr><tr><td>FedAvg</td><td>22.00</td><td>0.6572</td><td>24.65</td><td>0.6802</td><td>26.46</td><td>0.8188</td><td>24.37</td><td>0.7187</td></tr><tr><td>SCAFFCOLD</td><td>21.33</td><td>0.5633</td><td>24.47</td><td>0.6817</td><td>28.61</td><td>0.8398</td><td>24.80</td><td>0.6949</td></tr><tr><td>FedProx</td><td>21.77</td><td>0.6254</td><td>23.92</td><td>0.6791</td><td>27.60</td><td>0.8274</td><td>24.43</td><td>0.7106</td></tr><tr><td>LG-FedAvg</td><td>21.50</td><td>0.4461</td><td>23.63</td><td>0.6789</td><td>27.02</td><td>0.8352</td><td>24.05</td><td>0.6534</td></tr><tr><td>FedRep</td><td>22.01</td><td>0.6170</td><td>24.73</td><td>0.6999</td><td>29.72</td><td>0.8964</td><td>25.49</td><td>0.7378</td></tr><tr><td>Ours</td><td>23.43</td><td>0.6671</td><td>27.59</td><td>0.8272</td><td>34.82</td><td>0.9605</td><td>28.61</td><td>0.8183</td></tr><tr><td>Only Local Training</td><td>21.76</td><td>0.5141</td><td>25.23</td><td>0.7423</td><td>29.31</td><td>0.9022</td><td>25.43</td><td>0.7195</td></tr><tr><td>Ours</td><td>23.94</td><td>0.6929</td><td>28.08</td><td>0.8436</td><td>35.87</td><td>0.9686</td><td>29.30</td><td>0.8350</td></tr></table>
|
| 193 |
+
|
| 194 |
+
dataset [45]. We perform $\mathrm{x}2\downarrow$ , $\mathrm{x}4\downarrow$ , and $\mathrm{x}8\downarrow$ downsampling on the high-resolution images [49]. Each downsampled resolution of medical images is treated as a client, resulting in three clients in total. We used the RCNN [12] for the model homogeneous framework and SRResNet{6, 12, 18} [22] for the model heterogeneous framework.
|
| 195 |
+
|
| 196 |
+
C. Medical time-series classification. We used the Sleep-EDF dataset [13] for a three-client time-series classification task under a non-IID distribution. For the model homogeneous framework, we employed TCN. For the model heterogeneous framework, the three clients use TCN [3], Transformer [57], and RNN [50], respectively.
|
| 197 |
+
D. Medical image classification (different label distributions). This task includes a breast cancer classification task and an ocular disease recognition task. Similar to previous work [52], we also designed eight clients, each using a different model. They are ResNet [15], ShuffleNetV2 [32], ResNeXt [54], SqueezeNet [19], SENet [16], MobileNetV2 [42], DenseNet [17], and VGG [44]. We apply the same non-IID label distribution method as before to the BreaKHis
|
| 198 |
+
|
| 199 |
+
Table 3. The results of time-series classification with homogeneous models or heterogeneous models. We evaluate ACC and MF1 results on the Sleep-EDF dataset. The red boxes represent the method of individual clients adopting the homogeneous model settings (TCN). The blue boxes represent the method of using heterogeneous models. The three client models are TCN, Transformer, and RNN, respectively. In two different model settings, dFLMoE achieves the best performance.
|
| 200 |
+
|
| 201 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Client 1</td><td colspan="2">Client 2</td><td colspan="2">Client 3</td><td colspan="2">Average</td></tr><tr><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td></tr><tr><td>Only Local Training</td><td>0.9073</td><td>0.8757</td><td>0.8012</td><td>0.7933</td><td>0.7791</td><td>0.7289</td><td>0.8292</td><td>0.7993</td></tr><tr><td>FedAvg</td><td>0.8357</td><td>0.7281</td><td>0.7719</td><td>0.7726</td><td>0.7418</td><td>0.6083</td><td>0.7831</td><td>0.7030</td></tr><tr><td>SCAFFOLD</td><td>0.8792</td><td>0.8176</td><td>0.8473</td><td>0.8494</td><td>0.7575</td><td>0.6242</td><td>0.8280</td><td>0.7637</td></tr><tr><td>FedProx</td><td>0.8541</td><td>0.7668</td><td>0.8154</td><td>0.8162</td><td>0.7804</td><td>0.7179</td><td>0.8166</td><td>0.7670</td></tr><tr><td>FedRep</td><td>0.8934</td><td>0.8633</td><td>0.8367</td><td>0.8221</td><td>0.7782</td><td>0.7341</td><td>0.8361</td><td>0.8065</td></tr><tr><td>LG-FedAvg</td><td>0.8797</td><td>0.7613</td><td>0.8532</td><td>0.8568</td><td>0.7656</td><td>0.6954</td><td>0.8328</td><td>0.7712</td></tr><tr><td>MH-pFLID</td><td>0.9392</td><td>0.9117</td><td>0.8463</td><td>0.8321</td><td>0.8244</td><td>0.7973</td><td>0.8700</td><td>0.8470</td></tr><tr><td>dFLMoE(Ours)</td><td>0.9470</td><td>0.9303</td><td>0.9201</td><td>0.9210</td><td>0.8451</td><td>0.8123</td><td>0.9041</td><td>0.8879</td></tr><tr><td>Only Local Training</td><td>0.9073</td><td>0.8757</td><td>0.8053</td><td>0.8001</td><td>0.8012</td><td>0.7263</td><td>0.8379</td><td>0.8007</td></tr><tr><td>FedMD</td><td>0.9334</td><td>0.9225</td><td>0.7934</td><td>0.7966</td><td>0.793</td><td>0.7072</td><td>0.8399</td><td>0.8088</td></tr><tr><td>FedDF</td><td>0.9146</td><td>0.8893</td><td>0.7988</td><td>0.8042</td><td>0.7881</td><td>0.6855</td><td>0.8338</td><td>0.7930</td></tr><tr><td>pFedDF</td><td>0.9173</td><td>0.8957</td><td>0.827</td><td>0.8309</td><td>0.8137</td><td>0.7713</td><td>0.8527</td><td>0.8326</td></tr><tr><td>DS-pFL</td><td>0.9133</td><td>0.9033</td><td>0.8253</td><td>0.8301</td><td>0.8042</td><td>0.7539</td><td>0.8476</td><td>0.8291</td></tr><tr><td>KT-pFL</td><td>0.924</td><td>0.9089</td><td>0.8419</td><td>0.8466</td><td>0.8204</td><td>0.7722</td><td>0.8621</td><td>0.8426</td></tr><tr><td>MH-pFLID</td><td>0.9439</td><td>0.9248</td><td>0.8725</td><td>0.876</td><td>0.824</td><td>0.7773</td><td>0.8801</td><td>0.8594</td></tr><tr><td>dFLMoE(Ours)</td><td>0.9484</td><td>0.9319</td><td>0.9308</td><td>0.9319</td><td>0.8617</td><td>0.8319</td><td>0.9136</td><td>0.8986</td></tr></table>
|
| 202 |
+
|
| 203 |
+
and ODIR-5K datasets [4] across 8 clients. Specifically, the data distribution varies among clients.
|
| 204 |
+
|
| 205 |
+
E. Medical image segmentation. Here, we focus on polyp segmentation [11]. The dataset consists of endoscopic images collected and annotated from four centers, with each center's dataset treated as a separate client. We employed Unet [39] for the model homogeneous framework. For the model heterogeneous framework, the four clients adopted Unet++ [60], Unet [39], Res-Unet [10], and FCN [31], respectively.
|
| 206 |
+
|
| 207 |
+
# 4.2. Results
|
| 208 |
+
|
| 209 |
+
Medical image classification (different resolutions). In this task, we compare dFLMoE with the baseline frameworks under two different model settings. For the model homogeneous framework, all frameworks use ResNet5. For the model heterogeneous framework, as in previous work, we use the ResNet family. Clients with low-resolution images
|
| 210 |
+
|
| 211 |
+
Table 4. The results of Image Classification Task with Different Label Distributions. This task includes breast cancer classification and Ocular disease recognition. We evaluate ACC and MF1 results in this task. The larger the better. Bold number means the best. dFLMoE has the best performance.
|
| 212 |
+
|
| 213 |
+
<table><tr><td colspan="17">Breast Cancer Classification</td><td></td><td></td></tr><tr><td rowspan="2">Method</td><td colspan="2">ResNet</td><td colspan="2">shufflenetv2</td><td colspan="2">ResNeXt</td><td colspan="2">squeezeNet</td><td colspan="2">SENet</td><td colspan="2">MobileNet</td><td colspan="2">DenseNet</td><td colspan="2">VGG</td><td>Average</td><td></td></tr><tr><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td></tr><tr><td>Only Local Training</td><td>0.59</td><td>0.455</td><td>0.845</td><td>0.8412</td><td>0.665</td><td>0.5519</td><td>0.84</td><td>0.7919</td><td>0.875</td><td>0.849</td><td>0.755</td><td>0.5752</td><td>0.855</td><td>0.6884</td><td>0.875</td><td>0.8515</td><td>0.7875</td><td>0.7005</td></tr><tr><td>FedMD</td><td>0.692</td><td>0.5721</td><td>0.823</td><td>0.8027</td><td>0.704</td><td>0.6087</td><td>0.875</td><td>0.8544</td><td>0.907</td><td>0.8745</td><td>0.762</td><td>0.6627</td><td>0.835</td><td>0.6493</td><td>0.842</td><td>0.8001</td><td>0.8050</td><td>0.7281</td></tr><tr><td>FedDF</td><td>0.721</td><td>0.5949</td><td>0.817</td><td>0.8094</td><td>0.723</td><td>0.6221</td><td>0.893</td><td>0.8735</td><td>0.935</td><td>0.9021</td><td>0.757</td><td>0.6609</td><td>0.847</td><td>0.6819</td><td>0.833</td><td>0.7826</td><td>0.8158</td><td>0.7409</td></tr><tr><td>pFedDF</td><td>0.755</td><td>0.6536</td><td>0.853</td><td>0.8256</td><td>0.741</td><td>0.6237</td><td>0.894</td><td>0.8742</td><td>0.935</td><td>0.9021</td><td>0.796</td><td>0.7219</td><td>0.879</td><td>0.7095</td><td>0.874</td><td>0.8521</td><td>0.8409</td><td>0.7703</td></tr><tr><td>DS-pFL</td><td>0.715</td><td>0.6099</td><td>0.792</td><td>0.7734</td><td>0.765</td><td>0.6547</td><td>0.899</td><td>0.8792</td><td>0.935</td><td>0.9021</td><td>0.794</td><td>0.7331</td><td>0.853</td><td>0.6691</td><td>0.851</td><td>0.8266</td><td>0.8255</td><td>0.7560</td></tr><tr><td>KT-pFL</td><td>0.765</td><td>0.6733</td><td>0.87</td><td>0.8331</td><td>0.755</td><td>0.6432</td><td>0.885</td><td>0.8621</td><td>0.935</td><td>0.9021</td><td>0.78</td><td>0.6931</td><td>0.865</td><td>0.6819</td><td>0.905</td><td>0.9023</td><td>0.8450</td><td>0.7739</td></tr><tr><td>MH-pFLID</td><td>0.805</td><td>0.6427</td><td>0.945</td><td>0.9394</td><td>0.82</td><td>0.7604</td><td>0.963</td><td>0.9457</td><td>0.975</td><td>0.9709</td><td>0.815</td><td>0.7755</td><td>0.895</td><td>0.7287</td><td>0.995</td><td>0.9583</td><td>0.9016</td><td>0.8402</td></tr><tr><td>pFLMoE(Ours)</td><td>0.875</td><td>0.8745</td><td>0.975</td><td>0.9749</td><td>0.825</td><td>0.7951</td><td>0.945</td><td>0.8934</td><td>0.965</td><td>0.9458</td><td>0.805</td><td>0.7428</td><td>0.945</td><td>0.8611</td><td>0.995</td><td>0.9936</td><td>0.9163</td><td>0.8852</td></tr><tr><td colspan="18">Ocular Disease Recognition</td><td></td></tr><tr><td rowspan="2">Method</td><td colspan="2">ResNet</td><td colspan="2">shufflenetv2</td><td colspan="2">ResNeXt</td><td colspan="2">squeezeNet</td><td colspan="2">SENet</td><td colspan="2">MobileNet</td><td colspan="2">DenseNet</td><td colspan="2">VGG</td><td>Average</td><td></td></tr><tr><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td>ACC↑</td><td>MF1↑</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Only Local 
Training</td><td>0.6813</td><td>0.5607</td><td>0.6438</td><td>0.6406</td><td>0.5063</td><td>0.5019</td><td>0.5625</td><td>0.3705</td><td>0.8562</td><td>0.8532</td><td>0.5813</td><td>0.4711</td><td>0.5563</td><td>0.5061</td><td>0.8938</td><td>0.7273</td><td>0.6602</td><td>0.5789</td></tr><tr><td>FedMD</td><td>0.5375</td><td>0.2945</td><td>0.7375</td><td>0.7065</td><td>0.475</td><td>0.4017</td><td>0.5375</td><td>0.1748</td><td>0.5375</td><td>0.4558</td><td>0.6188</td><td>0.4245</td><td>0.6438</td><td>0.3916</td><td>0.8562</td><td>0.6114</td><td>0.6180</td><td>0.4326</td></tr><tr><td>FedDF</td><td>0.6938</td><td>0.6413</td><td>0.7688</td><td>0.7609</td><td>0.5437</td><td>0.5397</td><td>0.5688</td><td>0.1813</td><td>0.6313</td><td>0.6288</td><td>0.5375</td><td>0.5128</td><td>0.5563</td><td>0.5312</td><td>0.8938</td><td>0.5254</td><td>0.6493</td><td>0.5402</td></tr><tr><td>pFedDF</td><td>0.7312</td><td>0.641</td><td>0.7438</td><td>0.7324</td><td>0.6062</td><td>0.5443</td><td>0.5437</td><td>0.4536</td><td>0.6562</td><td>0.4611</td><td>0.5875</td><td>0.5095</td><td>0.5437</td><td>0.518</td><td>0.9062</td><td>0.7708</td><td>0.6648</td><td>0.5788</td></tr><tr><td>DS-pFL</td><td>0.7563</td><td>0.6567</td><td>0.7625</td><td>0.739</td><td>0.575</td><td>0.5652</td><td>0.5813</td><td>0.3874</td><td>0.8625</td><td>0.8625</td><td>0.5875</td><td>0.5299</td><td>0.5875</td><td>0.5394</td><td>0.8688</td><td>0.6018</td><td>0.6977</td><td>0.6102</td></tr><tr><td>KT-pFL</td><td>0.7625</td><td>0.7144</td><td>0.775</td><td>0.7566</td><td>0.5125</td><td>0.4182</td><td>0.5688</td><td>0.3877</td><td>0.85</td><td>0.8498</td><td>0.6062</td><td>0.5078</td><td>0.625</td><td>0.4726</td><td>0.9187</td><td>0.8014</td><td>0.7023</td><td>0.6136</td></tr><tr><td>MH-pFLID</td><td>0.775</td><td>0.6899</td><td>0.8188</td><td>0.8126</td><td>0.635</td><td>0.5652</td><td>0.5625</td><td>0.4487</td><td>0.9125</td><td>0.9114</td><td>0.6125</td><td>0.5044</td><td>0.6188</td><td>0.5756</td><td>0.9125</td><td>0.8155</td><td>0.7310</td><td>0.6654</td></tr><tr><td>dFLMoE(Ours)</td><td>0.8052</td><td>0.7354</td><td>0.8313</td><td>0.8277</td><td>0.6562</td><td>0.6552</td><td>0.6313</td><td>0.4333</td><td>0.9625</td><td>0.9625</td><td>0.6313</td><td>0.5202</td><td>0.6500</td><td>0.5833</td><td>0.9500</td><td>0.8529</td><td>0.7647</td><td>0.6962</td></tr></table>
|
| 214 |
+
|
| 215 |
+
Table 5. For the medical image segmentation task, we evaluate the Dice result on Polyp dataset. The larger the better. **Bold number** means the best. The red boxes represent the method of using homogeneous models. Their clients use the Unet. The blue boxes represent the method of using heterogeneous models in each client. The four client models are set to Unet++, Unet, ResUnet, and FCN, respectively. dFLMoE achieves the best segmentation results.
|
| 216 |
+
|
| 217 |
+
<table><tr><td>Method</td><td>Client1</td><td>Client2</td><td>Client3</td><td>Client4</td><td>Average</td></tr><tr><td>FedAvg</td><td>0.5249</td><td>0.4205</td><td>0.5676</td><td>0.5500</td><td>0.5158</td></tr><tr><td>SCAFFOLD</td><td>0.5244</td><td>0.3591</td><td>0.5935</td><td>0.5713</td><td>0.5121</td></tr><tr><td>FedProx</td><td>0.5529</td><td>0.4674</td><td>0.5403</td><td>0.6301</td><td>0.5477</td></tr><tr><td>Ditto</td><td>0.5720</td><td>0.4644</td><td>0.6648</td><td>0.6416</td><td>0.5857</td></tr><tr><td>APFL</td><td>0.6120</td><td>0.5095</td><td>0.6333</td><td>0.5892</td><td>0.5860</td></tr><tr><td>LG-FedAvg</td><td>0.6053</td><td>0.5062</td><td>0.7371</td><td>0.5596</td><td>0.6021</td></tr><tr><td>FedRep</td><td>0.5809</td><td>0.3106</td><td>0.7088</td><td>0.7023</td><td>0.5757</td></tr><tr><td>FedSM</td><td>0.6894</td><td>0.6278</td><td>0.8021</td><td>0.7391</td><td>0.7146</td></tr><tr><td>LC-Fed</td><td>0.6233</td><td>0.4982</td><td>0.8217</td><td>0.7654</td><td>0.6772</td></tr><tr><td>dFLMoE (Ours)</td><td>0.7918</td><td>0.6882</td><td>0.8808</td><td>0.7644</td><td>0.7813</td></tr><tr><td>Only Local Training</td><td>0.7049</td><td>0.4906</td><td>0.8079</td><td>0.7555</td><td>0.6897</td></tr><tr><td>MH-pFLID</td><td>0.7565</td><td>0.6830</td><td>0.8644</td><td>0.7644</td><td>0.7671</td></tr><tr><td>dFLMoE (Ours)</td><td>0.7945</td><td>0.6859</td><td>0.8709</td><td>0.7710</td><td>0.7806</td></tr></table>
|
| 218 |
+
|
| 219 |
+
employ shallower models, while clients with high-resolution images use more complex models. In Tab. 1, experimental results show that in both model settings, dFLMoE achieves the best performance. This indicates that dFLMoE can effectively integrate knowledge from both homogeneous or heterogeneous models, thereby enhancing the performance of local models.
|
| 220 |
+
|
| 221 |
+
Medical image super-resolution. This task involves reconstructing low-resolution medical images of different resolutions into high-resolution images. We consider all images of the same resolution as a single client. In this task, we use the RCNN for the model homogeneous framework and the SRResNet family for the model heterogeneous framework. As shown in Tab. 2, dFLMoE achieves the best results. Moreover, as shown in Figure 4, our framework can recover more details.
|
| 222 |
+
|
| 223 |
+
Time-series classification. The experimental results in Tab. 3 show that dFLMoE achieves the best results under two
|
| 224 |
+
|
| 225 |
+
different model settings. This further demonstrates the superiority of dFLMoE in federated learning of homogeneous and heterogeneous models.
|
| 226 |
+
|
| 227 |
+
Medical image classification (different label distributions). In Tab. 4, the experimental results for the medical image classification task with different label distributions, where each client uses heterogeneous models, show that dFLMoE achieves the optimal results. This demonstrates that, compared to heterogeneous federated learning methods, the Mixture of Experts approach of dFLMoE can more effectively fuse knowledge from other clients to make decisions.
|
| 228 |
+
|
| 229 |
+
Medical image segmentation. We validate the effectiveness of dFLMoE in medical image segmentation tasks. Tab. 5 presents the results of federated learning in the segmentation task, demonstrating that dFLMoE achieves the best experimental outcomes under two different model settings. The experimental results not only demonstrate that dFLMoE effectively enhances local model performance, but also prove its applicability to various medical tasks. Meanwhile, the visualization results in Figure 5 show that the segmentation results of dFLMoE are closer to ground truth.
|
| 230 |
+
|
| 231 |
+
Connection robustness. As shown in Tab. 7, we design two disconnect experiments for the medical image classification (different resolutions) and medical image segmentation tasks to verify that dFLMoE can still help improve local model training performance in disconnect scenarios. Communication disconnect refers to randomly dropping clients' upload or download processes. Client disconnect means that the corresponding client does not participate in the federated learning. The experimental results are shown in Tab. 6. In the communication disconnect experiment, the results show that, compared to centralized solutions, our method experiences lower performance degradation as the disconnect rate increases. When the disconnect rate reaches $75\%$ , the centralized
|
| 232 |
+
|
| 233 |
+
Table 6. The disconnect experiment of dFLMoE and MH-pFLID (centralized Federated Learning) in medical image classification (different resolutions) and medical image segmentation tasks. In the communication disconnect, we randomly disconnect each client's upload or download operations with the server. At a disconnect rate of $50\%$ , centralized federated learning ensures that each client maintains at least one upload or download operation. At a dropout rate of $75\%$ , it becomes only local training. In the client disconnect, we directly remove certain clients during the federated learning process. For example, a disconnect rate of $25\%$ indicates that only three clients participate in the federated learning, while "None (3 clients)" refers to the performance of three clients out of four. dFLMoE shows less performance degradation compared to the centralized approach in disconnect scenarios.
|
| 234 |
+
|
| 235 |
+
<table><tr><td colspan="6">Communication disconnect</td></tr><tr><td rowspan="2">Task</td><td rowspan="2">Method</td><td>None</td><td>25%</td><td>50%</td><td>75%</td></tr><tr><td>ACC</td><td>ACC</td><td>ACC</td><td>ACC</td></tr><tr><td rowspan="2">Classification</td><td>dFLMoE (Ours)</td><td>0.8880</td><td>0.8798</td><td>0.8474</td><td>0.8011</td></tr><tr><td>MH-pFLID</td><td>0.8583</td><td>0.8393</td><td>0.7687</td><td>0.7258</td></tr><tr><td colspan="2"></td><td>Dice</td><td>Dice</td><td>Dice</td><td>Dice</td></tr><tr><td rowspan="2">Segmentation</td><td>dFLMoE (Ours)</td><td>0.7860</td><td>0.7789</td><td>0.7423</td><td>0.7211</td></tr><tr><td>MH-pFLID</td><td>0.7671</td><td>0.7641</td><td>0.7043</td><td>0.6897</td></tr><tr><td colspan="6">Client disconnect</td></tr><tr><td rowspan="2">Task</td><td rowspan="2">Method</td><td>None(3 client)</td><td>25%</td><td>None(2 client)</td><td>50%</td></tr><tr><td>ACC</td><td>ACC</td><td>ACC</td><td>ACC</td></tr><tr><td rowspan="2">Classification</td><td>dFLMoE (Ours)</td><td>0.8771</td><td>0.8633</td><td>0.8638</td><td>0.8474</td></tr><tr><td>MH-pFLID</td><td>0.8447</td><td>0.8193</td><td>0.8340</td><td>0.8087</td></tr><tr><td colspan="2"></td><td>Dice</td><td>Dice</td><td>Dice</td><td>Dice</td></tr><tr><td rowspan="2">Segmentation</td><td>dFLMoE (Ours)</td><td>0.7873</td><td>0.7642</td><td>0.7838</td><td>0.7446</td></tr><tr><td>MH-pFLID</td><td>0.7681</td><td>0.7359</td><td>0.7737</td><td>0.7154</td></tr></table>
|
| 236 |
+
|
| 237 |
+
Table 7. Difference between communication disconnect and client disconnect in centralized and decentralized federated learning.
|
| 238 |
+
|
| 239 |
+
<table><tr><td></td><td>Centralized</td><td>Decentralized</td></tr><tr><td>Communication disconnect</td><td>Randomly disconnect each client's upload or download operations with the central server.</td><td>Randomly disconnect the upload or download operations between each client.</td></tr><tr><td>Client disconnect</td><td colspan="2">Remove the corresponding clients during the federated learning process.</td></tr></table>
|
| 240 |
+
|
| 241 |
+
Table 8. In heterogeneous model settings, we compare the impact of expert parameter quantities on performance in medical image segmentation, time-series classification, breast cancer classification (with different label distributions), and medical super-resolution tasks. #Params represents the average amount of parameters a client needs to share in one communication. The experimental results show that using the entire local model as experts leads to limited performance improvement.
|
| 242 |
+
|
| 243 |
+
<table><tr><td rowspan="2">Expert</td><td colspan="2">Segmentation</td><td colspan="2">Time-series</td></tr><tr><td>#Params(M)</td><td>Dice</td><td>#Params(M)</td><td>ACC</td></tr><tr><td>Head</td><td>0.001</td><td>0.7806</td><td>0.002</td><td>0.9136</td></tr><tr><td>Entire local model</td><td>24.015</td><td>0.7921</td><td>1.181</td><td>0.9122</td></tr><tr><td rowspan="2">Expert</td><td colspan="2">Breast Cancer</td><td colspan="2">Super-resolution</td></tr><tr><td>#Params(M)</td><td>ACC</td><td>#Params(M)</td><td>PSNR</td></tr><tr><td>Head</td><td>0.004</td><td>0.9163</td><td>0.001</td><td>29.30</td></tr><tr><td>Entire local model</td><td>9.763</td><td>0.9077</td><td>7.321</td><td>29.43</td></tr></table>
|
| 244 |
+
|
| 245 |
+
solution performs similarly to only local training, while our approach still allows for knowledge transfer, thereby enhancing local model performance. In the client disconnect experiment, dFLMoE again shows less performance
|
| 246 |
+
|
| 247 |
+
Table 9. The ablation experiments of dFLMoE. We remove some essential modules to verify the effectiveness of each module. We perform experiments on time-series classification, medical image super-resolution, and segmentation tasks. We observe that although these ablated variants still outperform centralized methods, they suffer different levels of performance decrease. (MoE: Mixture of Experts; FST: Feature space transform)
|
| 248 |
+
|
| 249 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="2">Time-series</td><td colspan="2">Super-resolution</td><td>Segmentation</td></tr><tr><td>ACC↑</td><td>MF1↑</td><td>PSNR↑</td><td>SSIM↑</td><td>Dice↑</td></tr><tr><td>dFLMoE (Ours)</td><td>0.9041</td><td>0.8879</td><td>29.30</td><td>0.8350</td><td>0.7860</td></tr><tr><td>w/o MoE module</td><td>0.8731</td><td>0.8533</td><td>28.65</td><td>0.8234</td><td>0.7344</td></tr><tr><td>w/o FST module</td><td>0.8812</td><td>0.8681</td><td>28.44</td><td>0.8261</td><td>0.7421</td></tr><tr><td>w/ centralized MoE& FST</td><td>0.8609</td><td>0.8347</td><td>27.46</td><td>0.8199</td><td>0.6625</td></tr><tr><td>w/ aggregated head</td><td>0.8361</td><td>0.8065</td><td>26.07</td><td>0.7891</td><td>0.5893</td></tr></table>
|
| 250 |
+
|
| 251 |
+
degradation compared to the centralized approach.
|
| 252 |
+
|
| 253 |
+
Number of expert parameters. As shown in Tab. 8, under heterogeneous model settings, we compare the impact of the expert parameter quantity on performance across four tasks. The experimental results show that using the entire local model as an expert leads to limited performance improvements but significantly increases the average number of parameters each client needs to share, resulting in a higher communication burden.
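
The comparison in Tab. 8 is essentially a parameter count. The snippet below shows one way to reproduce this kind of number for one's own models; the small convolutional body and 1x1-conv head are stand-ins, not the exact architectures from the paper.

```python
import torch.nn as nn

def shared_params_in_millions(module):
    """Count the parameters a client would need to share, in millions."""
    return sum(p.numel() for p in module.parameters()) / 1e6

# Stand-in model: a small convolutional body with a 1x1-conv segmentation head.
body = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
head = nn.Conv2d(64, 1, 1)

print(f"head only: {shared_params_in_millions(head):.3f}M")
print(f"entire model: {shared_params_in_millions(nn.Sequential(body, head)):.3f}M")
```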
|
| 254 |
+
|
| 255 |
+
Ablation studies. To verify the effectiveness of the proposed components of dFLMoE, Tab. 9 compares dFLMoE against four ablated variants on the time-series classification, super-resolution, and segmentation tasks. The four variants are as follows: (1) w/o MoE: we replace our designed MoE with the original MoE. (2) w/o FST: we remove the feature space transform module from the local network. (3) w/ centralized MoE & FST and (4) w/ aggregated head: all clients' MoE and FST parameters, or head parameters, are uploaded to a central server for aggregation. The experimental results show that our designed MoE and FST modules effectively integrate knowledge from the various clients. Compared to centralized aggregation, our decentralized approach better utilizes knowledge from other clients to enhance local model performance.
|
| 256 |
+
|
| 257 |
+
# 5. Conclusions
|
| 258 |
+
|
| 259 |
+
Centralized federated learning can damage knowledge during aggregation, so the knowledge is already degraded before it reaches back to each client. It also creates a dependency on the central server, which may affect training stability if the server malfunctions or connections are unstable. We design a decentralized federated learning framework named dFLMoE to address these issues. dFLMoE shares each client's head model as an expert with the other clients and uses an MoE approach to fuse the knowledge from these experts into the final decision. We demonstrate the effectiveness of our framework on 5 non-IID medical tasks under two model settings and achieve state-of-the-art performance.
|
| 260 |
+
|
| 261 |
+
# References
|
| 262 |
+
|
| 263 |
+
[1] Vidushi Agarwal, Shruti Mishra, and Sujata Pal. Towards a sustainable blockchain: A peer-to-peer federated learning based approach. ACM Transactions on Internet Technology, 2024. 3
|
| 264 |
+
[2] Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. Federated learning with personalization layers. arXiv preprint arXiv:1912.00818, 2019. 2
|
| 265 |
+
[3] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018. 6
|
| 266 |
+
[4] Amit Bhati, Neha Gour, Pritee Khanna, and Aparajita Ojha. Discriminative kernel convolution network for multi-label ophthalmic disease detection on imbalanced fundus image dataset. Computers in Biology and Medicine, 153:106519, 2023. 6
|
| 267 |
+
[5] Gal Blecher and Shai Fine. Moeatt: A deep mixture of experts model using attention-based routing gate. In 2023 International Conference on Machine Learning and Applications (ICMLA), pages 1018-1024. IEEE, 2023. 4
|
| 268 |
+
[6] Qian Chen, Zilong Wang, Yilin Zhou, Jiawei Chen, Dan Xiao, and Xiaodong Lin. Cfl: Cluster federated learning in large-scale peer-to-peer networks. In International Conference on Information Security, pages 464-472. Springer, 2022. 3
|
| 269 |
+
[7] Yiqiang Chen, Xin Qin, Jindong Wang, Chaohui Yu, and Wen Gao. Fedhealth: A federated transfer learning framework for wearable healthcare. IEEE Intelligent Systems, 35(4):83-93, 2020. 2
|
| 270 |
+
[8] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. In ICML, pages 2089-2099. PMLR, 2021. 1, 2
|
| 271 |
+
[9] Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020. 1, 2
|
| 272 |
+
[10] Foivos I Diakogiannis, François Waldner, Peter Caccetta, and Chen Wu. Resunet-a: A deep learning framework for semantic segmentation of remotely sensed data. *ISPRS Journal of Photogrammetry and Remote Sensing*, 162:94–114, 2020. 6
|
| 273 |
+
[11] Bo Dong, Wenhai Wang, Deng-Ping Fan, Jinpeng Li, Huazhu Fu, and Ling Shao. Polyp-pvt: Polyp segmentation with pyramid vision transformers. arXiv preprint arXiv:2108.06932, 2021. 6
|
| 274 |
+
[12] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV 13, pages 184-199. Springer, 2014. 6
|
| 275 |
+
[13] Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung-Kang Peng, and H Eugene Stanley. Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. circulation, 101(23):e215-e220, 2000. 6
|
| 276 |
+
|
| 277 |
+
[14] Filip Hanzely and Peter Richtárik. Federated learning of a mixture of global and local models. arXiv preprint arXiv:2002.05516, 2020. 2
|
| 278 |
+
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015. 6
|
| 279 |
+
[16] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132-7141, 2018. 6
|
| 280 |
+
[17] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017. 6
|
| 281 |
+
[18] Yutao Huang, Lingyang Chu, Zirui Zhou, Lanjun Wang, Jiangchuan Liu, Jian Pei, and Yong Zhang. Personalized cross-silo federated learning on non-iid data. In AAAI, pages 7865-7873, 2021. 2
|
| 282 |
+
[19] Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016. 6
|
| 283 |
+
[20] Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, and Koji Yamamoto. Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-iid private data. IEEE Transactions on Mobile Computing, 22(1):191-205, 2023. 1, 2
|
| 284 |
+
[21] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In ICML, pages 5132-5143. PMLR, 2020. 2
|
| 285 |
+
[22] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photorealistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4681-4690, 2017. 6
|
| 286 |
+
[23] Daliang Li and Junpu Wang. Fedmd: Heterogenous federated learning via model distillation. CoRR, abs/1910.03581, 2019. 1, 2
|
| 287 |
+
[24] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. Proceedings of Machine learning and systems, 2:429-450, 2020. 2
|
| 288 |
+
[25] Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In ICML, pages 6357-6368. PMLR, 2021. 2
|
| 289 |
+
[26] Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang, and Chao Wu. Towards effective clustered federated learning: A peer-to-peer framework with adaptive neighbor matching. IEEE Transactions on Big Data, 2022. 3
|
| 290 |
+
[27] Paul Pu Liang, Terrance Liu, Liu Ziyin, Nicholas B Allen, Randy P Auerbach, David Brent, Ruslan Salakhutdinov, and Louis-Philippe Morency. Think locally, act globally: Federated learning with local and global representations. arXiv preprint arXiv:2001.01523, 2020. 1, 2
|
| 291 |
+
[28] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated
|
| 292 |
+
|
| 293 |
+
learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 1, 2
|
| 294 |
+
[29] Jinhua Liu, Christian Desrosiers, and Yuanfeng Zhou. Attmoe: attention-based mixture of experts for nuclear and cytoplasmic segmentation. Neurocomputing, 411:139-148, 2020. 4
|
| 295 |
+
[30] Yuan Liu, Zhengpeng Ai, Shuai Sun, Shuangfeng Zhang, Zelei Liu, and Han Yu. Fedcoin: A peer-to-peer payment system for federated learning. In Federated learning: privacy and incentive, pages 125-138. Springer, 2020. 3
|
| 296 |
+
[31] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 6
|
| 297 |
+
[32] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV), pages 116-131, 2018. 6
|
| 298 |
+
[33] Yishay Mansour, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. Three approaches for personalization with applications to federated learning. arXiv preprint arXiv:2002.10619, 2020. 2
|
| 299 |
+
[34] Othmane Marfoq, Giovanni Neglia, Richard Vidal, and Laetitia Kameni. Personalized federated learning through local memorization. In ICML, pages 15070-15092. PMLR, 2022. 2
|
| 300 |
+
[35] Othmane Marfoq, Giovanni Neglia, Richard Vidal, and Laetitia Kameni. Personalized federated learning through local memorization. In Proceedings of the 39th International Conference on Machine Learning, pages 15070–15092. PMLR, 2022. 2
|
| 301 |
+
[36] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In AISTATS, pages 1273-1282. PMLR, 2017. 1, 2
|
| 302 |
+
[37] Jed Mills, Jia Hu, and Geyong Min. Multi-task federated learning for personalised deep neural networks in edge computing. IEEE Transactions on Parallel and Distributed Systems, 33(3):630-641, 2021. 2
|
| 303 |
+
[38] Zhen Qin, Xueqiang Yan, Mengchu Zhou, and Shuiguang Deng. Blockfl: A blockchain-based fully decentralized peer-to-peer federated learning framework. In Proceedings of the ACM on Web Conference 2024, pages 2914-2925, 2024. 3
|
| 304 |
+
[39] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234-241. Springer, 2015. 6
|
| 305 |
+
[40] Abhijit Guha Roy, Shayan Siddiqui, Sebastian Pölsterl, Nassir Navab, and Christian Wachinger. BrainTorrent: A peer-to-peer environment for decentralized federated learning. arXiv preprint arXiv:1905.06731, 2019. 3
|
| 306 |
+
[41] Jose L Salmeron, Irina Arevalo, and Antonio Ruiz-Celma. Benchmarking federated strategies in peer-to-peer federated learning for biomedical data. Heliyon, 9(6), 2023. 3
|
| 307 |
+
|
| 308 |
+
[42] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4510-4520, 2018. 6
|
| 309 |
+
[43] Felix Sattler, Klaus-Robert Müller, and Wojciech Samek. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. IEEE transactions on neural networks and learning systems, 32(8):3710-3722, 2020. 2
|
| 310 |
+
[44] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 6
|
| 311 |
+
[45] Fabio A. Spanhol, Luiz S. Oliveira, Caroline Petitjean, and Laurent Heutte. A dataset for breast cancer histopathological image classification. IEEE Transactions on Biomedical Engineering, 63(7):1455-1462, 2016. 5, 6
|
| 312 |
+
[46] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023. 5
|
| 313 |
+
[47] Han Wang, Luis Muñoz-González, David Eklund, and Shahid Raza. Non-iid data re-balancing at iot edge with peer-to-peer federated learning for anomaly detection. In Proceedings of the 14th ACM conference on security and privacy in wireless and mobile networks, pages 153-163, 2021. 3
|
| 314 |
+
[48] Jiacheng Wang, Yueming Jin, and Liansheng Wang. Personalizing federated medical image segmentation via local calibration. In ECCV, pages 456-472. Springer, 2022. 2
|
| 315 |
+
[49] Luyuan Xie, Cong Li, Zirui Wang, Xin Zhang, Boyan Chen, Qingni Shen, and Zhonghai Wu. Shisrcnet: Super-resolution and classification network for low-resolution breast cancer histopathology image, 2023. 5, 6
|
| 316 |
+
[50] Luyuan Xie, Cong Li, Xin Zhang, Shengfang Zhai, Yuejian Fang, Qingni Shen, and Zhonghai Wu. Trls: A time series representation learning framework via spectrogram for medical signal processing, 2024. 6
|
| 317 |
+
[51] Luyuan Xie, Manqing Lin, Siyuan Liu, ChenMing Xu, Tianyu Luan, Cong Li, Yuejian Fang, Qingni Shen, and Zhonghai Wu. pflfe: Cross-silo personalized federated learning via feature enhancement on medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 599-610. Springer, 2024. 2
|
| 318 |
+
[52] Luyuan Xie, Manqing Lin, Tianyu Luan, Cong Li, Yuejian Fang, Qingni Shen, and Zhonghai Wu. Mh-pflid: Model heterogeneous personalized federated learning via injection and distillation for medical data analysis. arXiv preprint arXiv:2405.06822, 2024. 1, 2, 6
|
| 319 |
+
[53] Luyuan Xie, Manqing Lin, ChenMing Xu, Tianyu Luan, Zhipeng Zeng, Wenjun Qian, Cong Li, Yuejian Fang, Qingni Shen, and Zhonghai Wu. Mh-pflgb: Model heterogeneous personalized federated learning via global bypass for medical image analysis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 534-545. Springer, 2024. 1, 2
|
| 320 |
+
[54] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep
|
| 321 |
+
|
| 322 |
+
neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500, 2017. 6
|
| 323 |
+
[55] Liping Yi, Han Yu, Gang Wang, and Xiaoguang Liu. Fedlora: Model-heterogeneous personalized federated learning with lora tuning. CoRR, abs/2310.13283, 2023. 2
|
| 324 |
+
[56] Liping Yi, Han Yu, Gang Wang, and Xiaoguang Liu. pfedes: Model heterogeneous personalized federated learning with feature extractor sharing. CoRR, abs/2311.06879, 2023. 1, 2
|
| 325 |
+
[57] G. Zerveas, S. Jayaraman, D. Patel, A. Bhamidipaty, and C. Eickhoff. A transformer-based framework for multivariate time series representation learning. 2021. 6
|
| 326 |
+
[58] Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, and Feijie Wu. Parameterized knowledge transfer for personalized federated learning. In Advances in Neural Information Processing Systems, pages 10092-10104. Curran Associates, Inc., 2021. 1, 2
|
| 327 |
+
[59] Yang Zhao, Jun Zhao, Linshan Jiang, Rui Tan, and Dusit Niyato. Mobile edge computing, blockchain and reputation-based crowdsourcing iot federated learning: A secure, decentralized and privacy-preserving system. arXiv preprint arXiv:1906.10893, pages 2327-4662, 2019. 3
|
| 328 |
+
[60] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE transactions on medical imaging, 39(6):1856-1867, 2019. 6
|
CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:809125a96b2c7cbe283e71bbc917de85a9afabc70cb75e66fb2d8e12a95f0ac7
|
| 3 |
+
size 871575
|
CVPR/2025/dFLMoE_ Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cd9cdf67a4f8b474b98655df685640314c8da18efe5271e755aa86bc00d6bf77
|
| 3 |
+
size 447287
|
CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/319bbfa4-f317-493b-a350-5e2125370b5a_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:da8192f3b7fe062c6f9809cc23f07712358c30fcc636451d913117304945b6c9
|
| 3 |
+
size 89710
|
CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/319bbfa4-f317-493b-a350-5e2125370b5a_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f1e3970d1b9f2f45ac2a4fd535b75d41dbad2ea60aef31783dae42a0443f5e3d
|
| 3 |
+
size 112413
|
CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/319bbfa4-f317-493b-a350-5e2125370b5a_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e08935b07618a385302458f30b83404594fe7239223606d60b448e2d2eeae754
|
| 3 |
+
size 3767671
|
CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/full.md
ADDED
|
@@ -0,0 +1,318 @@
| 1 |
+
# g3D-LF: Generalizable 3D-Language Feature Fields for Embodied Tasks
|
| 2 |
+
|
| 3 |
+
Zihan Wang, Gim Hee Lee
|
| 4 |
+
|
| 5 |
+
School of Computing, National University of Singapore
|
| 6 |
+
|
| 7 |
+
zihan.wang@u.nus.edu
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
We introduce Generalizable 3D-Language Feature Fields (g3D-LF), a 3D representation model pre-trained on large-scale 3D-language dataset for embodied tasks. Our g3D-LF processes posed RGB-D images from agents to encode feature fields for: 1) Novel view representation predictions from any position in the 3D scene; 2) Generations of BEV maps centered on the agent; 3) Querying targets using multi-granularity language within the above-mentioned representations. Our representation can be generalized to unseen environments, enabling real-time construction and online updates. By volume rendering latent features along sampled rays and integrating semantic and spatial relationships through multiscale encoders, our g3D-LF produces representations at different scales and perspectives, aligned with multi-granularity language, via multi-level contrastive learning. Furthermore, we prepare a large-scale 3D-language dataset to align the representations of the feature fields with language. Extensive experiments on Vision-and-Language Navigation under both Panorama and Monocular settings, Zero-shot Object Navigation, and Situated Question Answering tasks highlight the significant advantages and effectiveness of our g3D-LF for embodied tasks. The code is available at https://github.com/MrZihan/g3D-LF.
|
| 12 |
+
|
| 13 |
+
# 1. Introduction
|
| 14 |
+
|
| 15 |
+
Embodied agents seek to understand 3D environments, enabling interaction with environments and humans by performing tasks such as Question Answering [4, 37, 39], Navigation [3, 6, 27, 28, 38, 61], etc. To this end, various 3D scene representation models tailored for embodied tasks have been proposed, including point cloud-based models [11, 22, 72], 3D occupancy [34], hybrid voxel [14], and feature fields [43, 48, 56, 63].
|
| 16 |
+
|
| 17 |
+
For multimodal embodied tasks in large-scale scenes, 3D representation models typically need: 1) generalization to unseen scenes, 2) real-time construction and online updating of representations, and 3) an open-vocabulary semantic space. Generalizable 3D feature fields provide the above advantages and have been widely explored across various embodied tasks.
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1. Our g3D-LF uses posed RGB-D images from the agent to predict novel view and BEV map representations at various scales within the 3D scene, aligned with multi-granularity language through 3D-language pre-training. The representation is applicable to embodied tasks like visual navigation and embodied question answering, facilitating scene representation, language-guided querying, and navigation planning.
|
| 21 |
+
|
| 22 |
+
Unlike point cloud-based models, which depend on complete and low-noise point clouds and are therefore less robust, the implicit representations of the feature fields are derived from the 2D foundation model, preserving semantic expressiveness even with few-shot observations from 3D scenes. As shown in Figure 1, the feature fields model uses RGB-D images as input to encode and update implicit scene representations, which are then used to predict novel view, panorama and BEV map representations associated with language through volume rendering. These predicted representations can assist embodied tasks such as navigation planning [43, 56, 57], etc. However, several significant drawbacks remain in these feature fields models: 1) The supervision for the predicted representations comes from 2D foundation models, e.g., CLIP [44] and DINOv2 [41], which greatly limits the understanding of 3D spatial relationships; 2) These models are trained without language supervision, resulting in a substantial gap with language semantics; 3) The large-scale representations, e.g., panorama
|
| 23 |
+
|
| 24 |
+
and BEV map from feature fields is particularly challenging for long text understanding. These issues severely limit the potential of the feature fields model on language-guided embodied tasks.
To circumvent the above-mentioned issues, we introduce Generalizable 3D-Language Feature Fields (g3D-LF), a 3D representation model pre-trained on large-scale 3D-language dataset for embodied tasks. We first curate and consolidate a large amount of 3D-language data from previous works [7, 23, 65] to train our g3D-LF model. These data include 5K indoor scenes and almost 1M language descriptions of multiple granularities. The text annotations include object categories, object characteristics, object relationships, and the spatial layout of the entire scene, which are employed to supervise multiscale encoders of the g3D-LF model. We then design our g3D-LF model to learn generalizable 3D-language feature fields. To this end, we employ multi-level contrastive learning for multi-scale encoders to align predicted representations and language across different scales. For the regional representation within the novel view, a contrastive loss is calculated across 1,883 indoor object categories. For the predicted novel view representation, both the CLIP visual representations and language are employed for contrastive training to balance generalization ability and language alignment. For large-scale panorama and BEV representations, we propose the fine-grained contrastive learning based on the affinity matrix to achieve long text understanding.
The pre-trained g3D-LF model is subsequently evaluated on various embodied tasks, including vision-and-language navigation (monocular setting [57] and panorama setting [56]), zero-shot object navigation [61], and situated question answering [37], and achieves significant performance improvements. In this work, our main contributions include:
- This work proposes the Generalizable 3D-Language Feature Fields (g3D-LF) with a multi-level contrastive learning framework to align the multi-scale representations of feature fields with multi-granularity language.
- Our proposed g3D-LF model improves multiple baseline methods to state-of-the-art performance across various embodied tasks, thus validating the potential of our generalizable feature fields for Embodied AI.
# 2. Related Work
Generalizable 3D Feature Fields. The neural radiance field (NeRF) [40], which predicts the RGB image from an arbitrary viewpoint in a 3D scene, has gained significant popularity in various AI tasks. Furthermore, some works leverage NeRF-based methods to predict novel view representations instead of RGB values, enabling 3D semantic segmentation [50] and 3D language grounding [24]. However, these methods with implicit MLP networks can only synthesize novel view representations in seen scenes, which makes it
difficult to generalize to unseen large-scale scenes and adapt to many embodied AI tasks (e.g., navigation). To this end, some works [43, 49, 56] attempt to encode 2D visual observations into 3D representations (called Generalizable 3D Feature Fields) via the depth map. Through volume rendering [40], these models decode novel view representations from the feature fields and align them with open-world features (e.g., CLIP embeddings [44]). The 3D feature fields can generalize to unseen scenes, enabling real-time construction and online updates. However, the drawback of these models lies in the fact that the supervision of their predicted representations comes from 2D visual models, which limits their performance in language-guided embodied tasks. Our work offers a feasible approach to training the 3D feature fields model with large-scale 3D-language data.
Vision-and-Language Navigation. Vision-and-Language Navigation (VLN) [3, 9, 19, 27, 42, 53, 68] requires the agent to understand complex natural language instructions and navigate to the described destination using low-level actions, e.g., turn left 15 degrees, turn right 15 degrees, or move forward 0.25 meters. To address the inefficiency and poor performance of atomic action prediction, some works [20, 26, 57] develop waypoint predictors to generate several candidate waypoints around the agent. The navigation policy model can then select the optimal waypoint as the next sub-goal and execute atomic actions to move, greatly enhancing planning efficiency. In this context, how to represent waypoints and carry out planning becomes critical. Some works use a topological map [2, 10] or BEV map [1, 32, 55] to represent semantic relationships between waypoints, while others [56, 57] explore feature fields to predict waypoint representations of novel views and improve navigation planning. Our g3D-LF model further improves the performance of methods using feature fields.
Zero-shot Object Navigation. In object-goal navigation [6, 46, 67], an agent is tasked with locating a specified object within indoor environments. Typically, reinforcement learning [71] is used to train a policy network that predicts actions, while object detection [35, 51] or segmentation models [18, 25, 64] help identify the object. However, these navigation models are often limited to specific objects, making open-vocabulary navigation challenging and hindering generalization in real-world applications [17]. To address this issue, zero-shot navigation methods have emerged [15, 38, 61, 70], leveraging Vision-and-Language Models (VLMs) [30, 31, 44] to identify potential directions or areas containing the target, followed by pre-trained point-goal navigation models [58] to search the potential areas. Since general 2D VLMs are not fully suited to indoor 3D environments, we are, to the best of our knowledge, the first to use an indoor 3D feature fields model for zero-shot object navigation.
Situated Question Answering. The Embodied Question Answering tasks [4, 13, 39] require the agent to observe the 3D environment and answer questions from humans. Furthermore, Situated Question Answering [37] requires advanced 3D spatial understanding of the agent to answer the question and to interpret and locate the position and orientation of the textual description. Compared to previous works [14, 22, 23] using point clouds, we only use RGB-D images to encode feature fields and leverage their multi-scale representations for localization and question answering.
# 3. Our Method
# 3.1. 3D-Language Data
We prepare a large-scale 3D-language dataset to align the representations of the feature fields with language. Our dataset includes about 5K 3D indoor scenes, mainly sourced from the single-room scans of ScanNet [12], the multi-room house scans of the Habitat-Matterport 3D dataset (HM3D) [45, 59], and the photo-realistic multi-room scenes of Structured3D [69]. The total number of language annotations is close to one million, mainly sourced from the SceneVerse dataset [23]. SceneVerse uses 3D scene graphs and large language models (LLMs) to automatically generate high-quality object-level and scene-level descriptions. The annotations also include a large set of human-annotated object referrals [7].
We organize the dataset as follows to streamline feature fields training: 1) For each 3D scene, the agent can observe numerous RGB-D images and their corresponding poses as inputs. 2) An instance-level point cloud marks each instance in the scene with an instance ID, which can be used to retrieve associated language descriptions from the database. It is thus easy to find the instances near any given point in the 3D scene and obtain their language descriptions, which lets the training code efficiently obtain language annotations for specific regions within a novel view or a BEV map, as sketched below.
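
As a minimal illustration of this lookup (the variable names and data layout below are assumptions, not the released data format), retrieving the language annotations for a region can be a simple nearest-instance query over the annotated point cloud:

```python
import numpy as np

# Hypothetical per-scene annotation data:
#   inst_xyz: (P, 3) instance-level point cloud coordinates
#   inst_ids: (P,)   instance ID of every point
#   lang_db:  dict mapping instance ID -> list of language descriptions
def query_region_annotations(center, radius, inst_xyz, inst_ids, lang_db):
    """Return the language descriptions of all instances near a 3D point."""
    dists = np.linalg.norm(inst_xyz - center[None, :], axis=1)
    nearby_ids = np.unique(inst_ids[dists < radius])
    return {int(i): lang_db.get(int(i), []) for i in nearby_ids}

# Usage: annotations within a 2 m neighbourhood of a rendered-ray surface point.
# texts = query_region_annotations(np.array([1.2, 0.4, 2.0]), 2.0, inst_xyz, inst_ids, lang_db)
```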
# 3.2. 3D-Language Feature Fields
Feature Fields Encoding. As shown in Figure 2, our g3D-LF model follows HNR [56] to take a posed RGB image as input and uses the CLIP image encoder to extract fine-grained visual features $\{\mathbf{g}_{t,i}\in \mathbb{R}^{768}\}_{i = 1}^{I}$ . $\mathbf{g}_{t,i}$ denotes the $i$ -th feature patch of the CLIP feature map extracted from $t$ -th frame observed by the agent. We then map $\mathbf{g}_{t,i}$ to the corresponding 3D world coordinates $\{P_{t,i}\}_{i = 1}^{I}$ using the depth map and camera parameters.
For each feature $\mathbf{g}_{t,i}$ , the observed horizontal orientation $\theta_{t,i}$ and the regional size $s_{t,i}$ are also calculated and stored to enhance the spatial representation. The set of feature points $\mathcal{M}$ can therefore be updated online as:
$$
\mathcal{M}_{t} = \mathcal{M}_{t-1} \cup \left\{ \left[ \mathbf{g}_{t,i}, P_{t,i}, \theta_{t,i}, s_{t,i} \right] \right\}_{i=1}^{I}. \tag{1}
$$
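
The following sketch illustrates this update step under stated assumptions: the patch-grid intrinsics, the way orientation and region size are derived, and all tensor shapes are illustrative rather than the exact g3D-LF implementation.

```python
import torch

def update_feature_fields(M_prev, patch_feats, patch_depth, K, cam_to_world):
    """Sketch of Eq. (1): back-project per-patch CLIP features to 3D world
    coordinates and append them to the feature point set.
    patch_feats: (H, W, 768) CLIP patch features of frame t
    patch_depth: (H, W) depth sampled at the patch centers
    K: (3, 3) float intrinsics of the patch grid, cam_to_world: (4, 4) camera pose."""
    H, W, C = patch_feats.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    uv1 = torch.stack([u.float() + 0.5, v.float() + 0.5, torch.ones(H, W)], dim=-1)
    # lift patch centers to camera coordinates, then to world coordinates
    cam_pts = (torch.linalg.inv(K) @ uv1.reshape(-1, 3).T).T * patch_depth.reshape(-1, 1)
    world_pts = (cam_to_world[:3, :3] @ cam_pts.T).T + cam_to_world[:3, 3]
    # observed horizontal orientation and an approximate metric footprint per patch
    theta = torch.atan2(cam_pts[:, 0], cam_pts[:, 2])
    size = patch_depth.reshape(-1) / K[0, 0]
    new_points = [{"g": g, "P": P, "theta": t, "s": s}
                  for g, P, t, s in zip(patch_feats.reshape(-1, C), world_pts, theta, size)]
    return M_prev + new_points  # M_t = M_{t-1} union {[g, P, theta, s]}
```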
Ray-View-Panorama Encoding. The $\mathbf{MLP}_{view}$ network aggregates nearby features within the feature fields $\mathcal{M}$ and encodes their spatial information [56] (i.e., relative positions and relative directions) to predict the semantic representation $\mathbf{r} \in \mathbb{R}^{768}$ and volume density $\sigma \in \mathbb{R}^1$ at any point and from any direction in the continuous fields.
For each novel view, our g3D-LF model generates a feature map $\mathbf{R} \in \mathbb{R}^{12 \times 12 \times 768}$ by predicting subregion features through volume rendering within the feature fields. The model samples $N$ points along the ray from the camera position to each subregion center, searches for the k-nearest features, and predicts the volume density $\sigma_{n}$ and latent representation $\mathbf{r}_{n}$ of each sampled point, which are then composited into a subregion feature:
$$
\mathbf{R}_{(u, v)} = \sum_{n=1}^{N} \tau_{n} \left(1 - \exp\left(-\sigma_{n} \Delta_{n}\right)\right) \mathbf{r}_{n}, \tag{2}
$$
where $\tau_{n}$ represents the volume transmittance and $\Delta_{n}$ is the distance between sampled points. $\mathbf{R}_{(u,v)}$ denotes the regional feature at the $u$-th row and $v$-th column of the novel view feature map $\mathbf{R}$. We integrate surrounding context by feeding the feature map $\mathbf{R}$, together with a learnable view token $\mathbf{V} \in \mathbb{R}^{768}$, into the transformer-based view encoder to obtain the encoded $\mathbf{R}'$ and the novel view representation $\mathbf{V}'$ that represents the entire novel view. Furthermore, to reason about relationships across multiple views within a panorama, our g3D-LF model predicts 12 novel views $\{\mathbf{V}_i'\}_{i=1}^{12}$ around the viewpoint at 30-degree intervals and feeds them into a transformer-based panorama encoder to obtain $\{\mathbf{V}_i''\}_{i=1}^{12}$.
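
For clarity, here is a minimal sketch of the volume rendering composite in Eq. (2); the tensor shapes and the numerical epsilon are assumptions.

```python
import torch

def composite_ray(sigma, r, deltas):
    """Sketch of Eq. (2): composite the latent features r_n of N sampled points
    into one subregion feature using densities sigma_n and distances delta_n.
    sigma: (N,), r: (N, 768), deltas: (N,)."""
    alpha = 1.0 - torch.exp(-sigma * deltas)              # per-sample opacity
    # transmittance tau_n: probability that the ray reaches sample n unblocked
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)
    tau = torch.cat([torch.ones(1), trans[:-1]], dim=0)
    weights = tau * alpha                                  # (N,)
    return (weights[:, None] * r).sum(dim=0)               # (768,)
```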
Ray-BEV Encoding. The novel view and panorama representations are insufficient for larger-scale scene understanding. To circumvent this problem, we propose to construct a BEV map representation via our g3D-LF, as shown in Figure 2. Unlike novel view prediction, where rays are emitted from the viewpoint along the viewing cone, the rendering rays for the BEV map are cast vertically from top to bottom. The starting point of each rendered ray is set slightly below the ceiling to avoid occlusion.
Specifically, the $\mathbf{MLP}_{BEV}$ network is used to aggregate the nearest feature points to each sampled point and predict its semantic representation $\hat{\mathbf{r}}_n$ and volume density $\hat{\sigma}_n$ in the continuous field. Subsequently, the ray representation $\hat{\mathbf{R}}_{(h,w)}\in \mathbb{R}^{768}$ can be obtained using a volume rendering procedure similar to Equation 2, where $(h,w)$ denotes the $h$-th row and $w$-th column of the BEV map $\hat{\mathbf{R}}\in \mathbb{R}^{168\times 168\times 768}$. To cover a large scene, the BEV map $\hat{\mathbf{R}}$ encompasses a $16.8\mathrm{m}\times 16.8\mathrm{m}$ area centered on the agent. After down-sampling the BEV map to $\hat{\mathbf{R}}_{conv}\in \mathbb{R}^{24\times 24\times 768}$ through a non-overlapping $7\times 7$ convolution layer, the transformer-based BEV map encoder captures semantic relationships between different regions to obtain the encoded BEV map representations $\hat{\mathbf{R}}^{\prime}\in \mathbb{R}^{24\times 24\times 768}$.
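
The geometry of this BEV branch can be sketched as follows; the grid size, extent, and down-sampling factor come from the text above, while everything else (e.g., the ray start height and module names) is an illustrative assumption.

```python
import torch
import torch.nn as nn

# Minimal sketch of the BEV branch geometry described above.
bev_res, bev_extent, feat_dim = 168, 16.8, 768
cell = bev_extent / bev_res                                    # 0.1 m per BEV cell

# agent-centred (x, y) offsets of the 168 x 168 top-down rays
offsets = (torch.arange(bev_res) + 0.5) * cell - bev_extent / 2
grid_x, grid_y = torch.meshgrid(offsets, offsets, indexing="ij")
ray_origins_xy = torch.stack([grid_x, grid_y], dim=-1)         # rays are cast straight down

# after per-ray volume rendering yields bev_feat with shape (1, 768, 168, 168):
downsample = nn.Conv2d(feat_dim, feat_dim, kernel_size=7, stride=7)  # -> (1, 768, 24, 24)
```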

Figure 2. Overview of our g3D-LF model. Our model encodes the observed RGB-D images into the feature fields (consisting of many feature points). By aggregating the k-nearest features, the MLP networks predict the latent feature and volume density of sampled points along each rendered ray. The hierarchical encoders further generate representations of the novel view, panorama, and BEV map, which are then trained with multi-level contrastive learning against multi-granularity language.
# 3.3. Multi-level Contrastive Learning
Balanced Object-level Alignment. We apply contrastive supervision with an object vocabulary $\mathcal{O} \in \mathbb{R}^{1883 \times 768}$ spanning 1,883 indoor object categories to supervise the $\mathbf{MLP}_{view}$ and $\mathbf{MLP}_{BEV}$ networks in predicting latent features within the feature fields. For the ray representations $\mathbf{R}$ obtained via volume rendering, the cosine similarities $\{\mathrm{CosSim}(\mathbf{R}, \mathcal{O}_i)\}_{i=1}^{1883}$ are computed with each vocabulary embedding. The training objective is to maximize the similarity for the correct object category and minimize it for all others, i.e.:
$$
\mathcal{L}_{\text{object}} = \operatorname{CrossEntropy}\left(\left\{\operatorname{CosSim}(\mathbf{R}, \mathcal{O}_{i}) / \tau\right\}_{i=1}^{1883}, \mathcal{O}^{gt}\right), \tag{3}
$$
where $\mathcal{O}^{gt}$ denotes the ground-truth category and $\tau$ is the temperature coefficient for contrastive learning. Similarly, the object alignment loss for the ray representations $\hat{\mathbf{R}}$ of the BEV map, denoted $\hat{\mathcal{L}}_{\text{object}}$, can also be calculated.
We notice that the network struggles to recognize smaller objects such as lamps, because dominant categories (e.g., floors and walls) lead to a long-tailed distribution in indoor scenes. To address this issue, we implement a balanced loss that emphasizes harder-to-recognize objects. Specifically, for the ray representations within the novel view or BEV map, the loss weight of the rays with the top $10\%$ cross entropy is significantly increased using a scaling factor $\alpha$. In short, rays with higher cross entropy indicate harder-to-recognize objects and therefore receive a higher loss weight.
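
A minimal sketch of this balanced object-level loss is given below; the temperature, scaling factor, and exact re-weighting rule are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def balanced_object_loss(ray_feats, vocab, gt_ids, tau=0.07, alpha=5.0, top_frac=0.1):
    """Sketch of Eq. (3) with the balanced re-weighting described above.
    ray_feats: (R, 768) rendered ray features, vocab: (1883, 768) category
    embeddings, gt_ids: (R,) ground-truth category indices (long)."""
    logits = F.normalize(ray_feats, dim=-1) @ F.normalize(vocab, dim=-1).T / tau
    ce = F.cross_entropy(logits, gt_ids, reduction="none")      # per-ray cross entropy
    # up-weight the hardest 10% of rays (highest cross entropy)
    k = max(1, int(top_frac * ce.numel()))
    threshold = ce.topk(k).values.min()
    weights = torch.ones_like(ce)
    weights[ce >= threshold] = alpha
    return (weights * ce).mean()
```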
Fine-grained Contrastive for Long Text. To enable our g3D-LF model to understand object relationships and spatial layouts, we propose a fine-grained contrastive learning method for long text alignment. As shown in Figure 2, our g3D-LF aligns the BEV features in a window (e.g., $5 \times 5$) with the long text features to enhance the representation of the BEV map for spatial semantics. Specifically, centered on an instance, the BEV features $\mathbf{B} = \{\hat{\mathbf{R}}_m'\}_{m=1}^{25}$ within the window are associated with $L$ word features $\mathbf{T} = \{\mathbf{W}_l\}_{l=1}^L$ from the CLIP text encoder through an affinity matrix $\mathbf{A}$:
$$
\mathbf{A}_{(m, l)} = \operatorname{CosSim}\left(\hat{\mathbf{R}}_{m}^{\prime}, \mathbf{W}_{l}\right) / \tau. \tag{4}
$$
The highest $L$ similarity scores (equal to the number of words) are extracted from the affinity matrix $\mathbf{A}$ , and their average is used as the fine-grained similarity score between the BEV window and the long text features:
$$
\operatorname{FineSim}(\mathbf{B}, \mathbf{T}) = \operatorname{Avg}(\operatorname{Topk}(\mathbf{A}, L)). \tag{5}
$$
Denoting the BEV features within the $i$ -th window as $\mathbf{B}_i$ and the $j$ -th text features as $\mathbf{T}_j$ , the fine-grained contrastive
learning loss can be calculated as:
$$
\begin{array}{l}
\hat{\mathcal{L}}_{long.text} = \frac{1}{J} \sum_{j=1}^{J} \operatorname{CrossEntropy}\left(\left\{\operatorname{FineSim}(\mathbf{B}_{i}, \mathbf{T}_{j})\right\}_{i=1}^{I}, j\right) \\
\quad + \frac{1}{I} \sum_{i=1}^{I} \operatorname{CrossEntropy}\left(\left\{\operatorname{FineSim}(\mathbf{T}_{j}, \mathbf{B}_{i})\right\}_{j=1}^{J}, i\right). \tag{6}
\end{array}
$$
Here, $I$ denotes the number of BEV windows and $J$ is the number of long texts per contrastive learning batch, with $I$ equal to $J$. Similarly, our g3D-LF model performs fine-grained contrastive learning between the encoded panoramic representations $\{\mathbf{V}_i^{\prime \prime}\}_{i = 1}^{12}$ and long-text features $\mathbf{T} = \{\mathbf{W}_n\}_{n = 1}^N$ to compute the fine-grained contrastive loss $\mathcal{L}_{long.text}$.
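
The sketch below illustrates Eqs. (4)-(6) end to end; the temperature value and the assumption that FineSim is computed the same way in both directions are illustrative simplifications.

```python
import torch
import torch.nn.functional as F

def fine_sim(B, T, tau=0.07):
    """Sketch of Eqs. (4)-(5): affinity between a BEV window (M regional features)
    and a long text (L word features), pooled by averaging the top-L entries.
    B: (M, 768), T: (L, 768)."""
    A = F.normalize(B, dim=-1) @ F.normalize(T, dim=-1).T / tau   # (M, L) affinity matrix
    L = T.shape[0]
    return A.flatten().topk(L).values.mean()

def long_text_loss(windows, texts):
    """Sketch of Eq. (6): symmetric fine-grained contrastive loss, where
    windows[i] is the positive pair of texts[i] and I == J."""
    sims = torch.stack([torch.stack([fine_sim(B, T) for T in texts]) for B in windows])  # (I, J)
    labels = torch.arange(len(windows))
    return F.cross_entropy(sims.T, labels) + F.cross_entropy(sims, labels)
```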
CLIP Knowledge Distillation. Since the 3D-language data is orders of magnitude smaller than image-language data (millions vs. billions [44]), our g3D-LF model still distills visual features from the CLIP model [44] to ensure robust generalization. Specifically, our g3D-LF uses the CLIP features extracted from the ground-truth novel view or the corresponding region image for contrastive supervision of the predicted novel view representation $\mathbf{V}^{\prime}$, the panorama representation $\mathbf{V}_i^{\prime \prime}$, and the BEV map representation $\hat{\mathbf{R}}_i^{\prime}$, i.e.:
$$
\mathcal{L}_{view\_clip} = \frac{1}{I} \sum_{i=1}^{I} \operatorname{CrossEntropy}\left(\left\{\operatorname{CosSim}\left(\mathbf{V}_{i}^{\prime}, \mathbf{V}_{j}^{gt}\right) / \tau\right\}_{j=1}^{J}, i\right), \tag{7}
$$
where $\mathbf{V}_j^{gt}$ denotes the ground-truth CLIP feature for the $j$-th novel view representation $\mathbf{V}_j'$. Similarly, the contrastive loss $\mathcal{L}_{pano\_clip}$ for the panoramic representation and $\mathcal{L}_{bev\_clip}$ for the BEV map can also be computed.
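
Under the assumption that the positives are in-batch ground-truth views, the distillation loss of Eq. (7) can be sketched as:

```python
import torch
import torch.nn.functional as F

def clip_distill_loss(pred_views, gt_clip_feats, tau=0.07):
    """Sketch of Eq. (7): match each predicted novel-view representation V'_i to
    the CLIP feature of its ground-truth view against the other views in the batch.
    pred_views, gt_clip_feats: (I, 768)."""
    sims = F.normalize(pred_views, dim=-1) @ F.normalize(gt_clip_feats, dim=-1).T / tau
    labels = torch.arange(pred_views.shape[0])
    return F.cross_entropy(sims, labels)
```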
# 3.4. Embodied Tasks
To verify the effectiveness of our g3D-LF model for embodied tasks, we integrate the predicted representations from our model into existing baseline methods and evaluate performance on Vision-and-Language Navigation, Zero-shot Object Navigation, and Situated Question Answering tasks.
Vision-and-Language Navigation. We evaluate the g3D-LF model on VLN tasks in two settings. The first setting uses a monocular camera, which only allows the agent to observe the forward-facing view. As shown in Figure 3, VLN-3DFF [57] is a monocular VLN model that predicts candidate waypoints around the agent using a semantic map, predicts each candidate's representation with generalizable feature fields [56], and then selects the optimal waypoint to move to through a cross-modal graph encoder [2, 10]. Based on this baseline method, we incorporate novel view representations from our g3D-LF model and input the BEV map into the cross-modal graph encoder following GridMM [55] to enhance spatial layout understanding. The second setting uses a panorama camera, in which the agent can observe

Figure 3. Monocular VLN framework based on VLN-3DFF [57].

Figure 4. Zero-shot object navigation framework based on VLFM [61].
12 RGB-D view images within the panorama. Following HNR [56], a waypoint predictor [20] is used to predict candidate waypoints, and our g3D-LF model generates panorama representations of these waypoints for navigation planning.
Zero-shot Object Navigation. As shown in Figure 4, unlike the baseline method VLFM [61], which uses the 2D foundation model BLIP-2 [30] to calculate the similarity between the target object and visual observations to construct the value map, we use our g3D-LF to predict the value of potential regions. Although the monocular agent can only observe the forward view, our g3D-LF predicts 12 novel view feature maps surrounding the agent within the panorama based on historical observations and calculates the maximum similarity between each feature map and the target object. The text features of the target object are also used to calculate the similarity with each region representation on the BEV map to obtain a larger-scale value map. Combining these two value maps, the navigation agent prioritizes traveling to the candidate waypoint with the highest similarity score.
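
A minimal sketch of how such a combined score could be computed is shown below; the additive fusion, tensor shapes, and waypoint indexing are illustrative assumptions rather than the exact VLFM or g3D-LF implementation.

```python
import torch
import torch.nn.functional as F

def best_waypoint(view_feat_maps, bev_feats, target_text_feat, waypoints):
    """Sketch of combining the local and global value maps described above.
    view_feat_maps: (12, 12, 12, 768) predicted novel-view feature maps,
    bev_feats: (24, 24, 768) encoded BEV map, target_text_feat: (768,),
    waypoints: list of (view_idx, bev_row, bev_col) per candidate waypoint."""
    t = F.normalize(target_text_feat, dim=-1)
    # local value: max similarity to the target inside each predicted view's 12x12 map
    local_val = (F.normalize(view_feat_maps, dim=-1) @ t).amax(dim=(1, 2))   # (12,)
    # global value map over the BEV grid
    bev_val = F.normalize(bev_feats, dim=-1) @ t                             # (24, 24)
    scores = [local_val[v] + bev_val[r, c] for v, r, c in waypoints]
    return max(range(len(scores)), key=lambda i: float(scores[i]))
```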
Situated Question Answering. A three-stage framework is shown in Figure 5, where we use our g3D-LF to train three transformer-based decoders for position, orientation and answer predictions. First, the Localization Decoder predicts the heatmap for location of the textual description based on the BEV map. Our g3D-LF model generates the panorama

Figure 5. The framework of situated question answering [37].
representations around the predicted location, which are then processed by the Orientation Decoder to predict the orientation. Finally, the textual description, question, BEV map, and panorama representations are fed into the Answer Decoder to generate the final answer.
# 4. Experiments
# 4.1. Experiment Setup and Metrics
g3D-LF Pre-training. We pre-train our g3D-LF model shown in Figure 2 on 5K 3D scenes. During training, 30 frames are uniformly sampled from the RGB-D video of each scene in the ScanNet [12] dataset to construct the feature fields, with an additional frame randomly selected as the novel view for prediction. The g3D-LF then predicts the panorama representation and BEV map centered on the camera of this novel view. For each ray in the novel view or BEV map, the corresponding instance ID can be found by locating the nearest instance point to the rendered surface within the annotated instance point cloud. The language annotations of the novel view, panorama, and BEV map can thus be obtained by retrieving language annotations with their instance IDs from the database for training. Because Structured3D [69] provides only a limited number of images per scene (fewer than 20), we use all of its available images for training. For the HM3D [45, 59] dataset, we follow HNR [56] and use the Habitat simulator [47] to randomly sample navigation trajectories and the observed RGB-D images, predict the novel views and panoramas around candidate waypoints, and construct the BEV map centered on the agent. The multi-level contrastive losses described in Section 3.3 are utilized to optimize the g3D-LF model.
Finally, we combine scenes from all datasets and pre-train our g3D-LF model for 50K episodes (about 10 days) on two RTX 6000 Ada GPUs. To ensure fair comparisons on downstream tasks, all training data only includes the train splits; the val and test splits are removed.
Vision-and-Language Navigation. We evaluate the VLN model on the VLN-CE dataset [27] in both monocular [57] and panorama [56] settings. R2R-CE is collected based on the Matterport3D [5] scenes with the Habitat simulator [47].
The R2R-CE dataset includes 5,611 trajectories divided into train, validation seen, validation unseen, and test unseen splits. Each trajectory has three English instructions, with an average path length of 9.89 meters and an average instruction length of 32 words. Several standard metrics [3] are used to evaluate VLN performance: Navigation Error (NE), Success Rate (SR), SR given the Oracle stop policy (OSR), and Success Rate weighted by normalized inverse Path Length (SPL).
Zero-shot Object Navigation. For object navigation, we evaluate our approach using the Habitat simulator [47] on the validation splits of two datasets, HM3D [45] and MP3D [5]. The HM3D validation split contains 2,000 episodes across 20 scenes and 6 object categories. The MP3D validation split contains 2,195 episodes across 11 scenes and 21 object categories. The main metrics [3] are Success Rate (SR) and Success Rate weighted by normalized inverse Path Length (SPL).
Situated Question Answering. Built on ScanNet [12] scenes, the SQA3D dataset comprises 20.4k descriptions and 33.4k diverse questions, split into train, val, and test sets. The main metric is the Exact Match (EM@1) of the answer. Additionally, for localization evaluation, the Acc@0.5m and Acc@1.0m metrics count a prediction as correct when the predicted position is within 0.5 meters or 1.0 meters of the ground-truth position, and the Acc@15° and Acc@30° metrics count a prediction as correct when the predicted orientation is within 15° or 30° of the ground-truth orientation.
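
These thresholded accuracies can be computed as in the short sketch below; the array names and units are assumptions.

```python
import numpy as np

def localization_metrics(pred_pos, gt_pos, pred_yaw, gt_yaw):
    """Sketch of the localization metrics described above.
    pred_pos, gt_pos: (N, 2) positions in metres; pred_yaw, gt_yaw: (N,) degrees."""
    pos_err = np.linalg.norm(pred_pos - gt_pos, axis=1)
    ang_err = np.abs((pred_yaw - gt_yaw + 180.0) % 360.0 - 180.0)   # wrap to [-180, 180]
    return {
        "Acc@0.5m": float((pos_err <= 0.5).mean()),
        "Acc@1.0m": float((pos_err <= 1.0).mean()),
        "Acc@15deg": float((ang_err <= 15.0).mean()),
        "Acc@30deg": float((ang_err <= 30.0).mean()),
    }
```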
# 4.2. Comparison with SOTA Methods
As shown in Table 1 and Table 2, we evaluate the VLN performance of our g3D-LF model on the R2R-CE dataset in both monocular and panorama settings, respectively. Table 1 shows that our g3D-LF significantly outperforms previous monocular VLN methods on the Success Rate (SR) metric, even compared to LLM-based methods such as NaVid [66] and InstructNav [36]. Compared to the panorama setting, monocular VLN has the advantage of being compatible with a broader range of real-world monocular robots. Our g3D-LF model overcomes the limitations of monocular cameras, enhancing the multi-view and BEV perception capabilities of the agent for monocular VLN.
<table><tr><td rowspan="2">Methods</td><td rowspan="2">LLM</td><td colspan="4">Val Unseen</td><td colspan="4">Test Unseen</td></tr><tr><td>NE↓</td><td>OSR↑</td><td>SR↑</td><td>SPL↑</td><td>NE↓</td><td>OSR↑</td><td>SR↑</td><td>SPL↑</td></tr><tr><td>CM2[16]</td><td>×</td><td>7.02</td><td>41.5</td><td>34.3</td><td>27.6</td><td>7.7</td><td>39</td><td>31</td><td>24</td></tr><tr><td>WS-MGMap [8]</td><td>×</td><td>6.28</td><td>47.6</td><td>38.9</td><td>34.3</td><td>7.11</td><td>45</td><td>35</td><td>28</td></tr><tr><td>NaVid [66]</td><td>✓</td><td>5.47</td><td>49.1</td><td>37.4</td><td>35.9</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>InstructNav* [36]</td><td>✓</td><td>6.89</td><td>-</td><td>31</td><td>24</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VLN-3DFF [57]</td><td>×</td><td>5.95</td><td>55.8</td><td>44.9</td><td>30.4</td><td>6.24</td><td>54.4</td><td>43.7</td><td>28.9</td></tr><tr><td>g3D-LF (Ours)</td><td>×</td><td>5.70</td><td>59.5</td><td>47.2</td><td>34.6</td><td>6.00</td><td>57.5</td><td>46.3</td><td>32.2</td></tr></table>
Table 1. Evaluation of VLN on R2R-CE with monocular setting. * denotes zero-shot method.
We follow HNR [56] to perform lookahead exploration through predicted candidate waypoint representations for the panorama setting in Table 2. Although the performance gains are smaller and the advantages are not as pronounced as for the monocular counterpart in Table 1, our g3D-LF model still achieves SOTA performance on the SPL metric and demonstrates competitive results on the SR metric.
<table><tr><td rowspan="2">Methods</td><td rowspan="2">LLM</td><td colspan="4">Val Unseen</td><td colspan="4">Test Unseen</td></tr><tr><td>NE↓</td><td>OSR↑</td><td>SR↑</td><td>SPL↑</td><td>NE↓</td><td>OSR↑</td><td>SR↑</td><td>SPL↑</td></tr><tr><td>Sim2Sim [26]</td><td>×</td><td>6.07</td><td>52</td><td>43</td><td>36</td><td>6.17</td><td>52</td><td>44</td><td>37</td></tr><tr><td>VLN-BERT [20]</td><td>×</td><td>5.74</td><td>53</td><td>44</td><td>39</td><td>5.89</td><td>51</td><td>42</td><td>36</td></tr><tr><td>GridMM [55]</td><td>×</td><td>5.11</td><td>61</td><td>49</td><td>41</td><td>5.64</td><td>56</td><td>46</td><td>39</td></tr><tr><td>Ego2-Map [21]</td><td>×</td><td>4.94</td><td>-</td><td>52</td><td>46</td><td>5.54</td><td>56</td><td>47</td><td>41</td></tr><tr><td>DREAM [52]</td><td>×</td><td>5.53</td><td>59</td><td>49</td><td>44</td><td>5.48</td><td>57</td><td>49</td><td>44</td></tr><tr><td>ScaleVLN [54]</td><td>×</td><td>4.80</td><td>-</td><td>55</td><td>51</td><td>5.11</td><td>-</td><td>55</td><td>50</td></tr><tr><td>ETPNav [2]</td><td>×</td><td>4.71</td><td>65</td><td>57</td><td>49</td><td>5.12</td><td>63</td><td>55</td><td>48</td></tr><tr><td>BEVBert [1]</td><td>×</td><td>4.57</td><td>67</td><td>59</td><td>50</td><td>4.70</td><td>67</td><td>59</td><td>50</td></tr><tr><td>HNR [56]</td><td>×</td><td>4.42</td><td>67</td><td>61</td><td>51</td><td>4.81</td><td>67</td><td>58</td><td>50</td></tr><tr><td>Energy [33]</td><td>×</td><td>4.69</td><td>65</td><td>58</td><td>50</td><td>5.08</td><td>64</td><td>56</td><td>48</td></tr><tr><td>g3D-LF (Ours)</td><td>×</td><td>4.53</td><td>68</td><td>61</td><td>52</td><td>4.78</td><td>68</td><td>58</td><td>51</td></tr></table>
In Table 3 for Zero-shot Object Navigation, our g3D-LF achieves SOTA performance on the SPL metric and competitive results on the SR metric. Notably, our g3D-LF is the only method that queries targets using feature fields instead of a VLM. Replacing BLIP-2 [30] in VLFM [61] with g3D-LF improves the navigation success rate (SR) by nearly $3\%$. Although the HM3D experiments are not strictly zero-shot, since g3D-LF is pre-trained on its training scenes, our model still performs well on the MP3D benchmark without using its training scenes or object vocabulary, demonstrating strong generalization. Compared to LLM-based methods such as InstructNav [36] and SG-Nav [60], our g3D-LF also offers advantages in response time and computational cost.
Table 2. Evaluation of VLN on R2R-CE with panorama setting.
<table><tr><td rowspan="2">Methods</td><td rowspan="2">LLM</td><td rowspan="2">VLM</td><td rowspan="2">Feature Fields</td><td colspan="2">HM3D</td><td colspan="2">MP3D</td></tr><tr><td>SR↑</td><td>SPL↑</td><td>SR↑</td><td>SPL↑</td></tr><tr><td>ZSON [38]</td><td>×</td><td>✓</td><td>×</td><td>25.5</td><td>12.6</td><td>15.3</td><td>4.8</td></tr><tr><td>ESC [70]</td><td>✓</td><td>✓</td><td>×</td><td>39.2</td><td>22.3</td><td>28.7</td><td>14.2</td></tr><tr><td>VLFM [61]</td><td>×</td><td>✓</td><td>×</td><td>52.5</td><td>30.4</td><td>36.4</td><td>17.5</td></tr><tr><td>InstructNav [36]</td><td>✓</td><td>✓</td><td>×</td><td>58.0</td><td>20.9</td><td>-</td><td>-</td></tr><tr><td>GMap [62]</td><td>✓</td><td>✓</td><td>×</td><td>53.1</td><td>26.0</td><td>-</td><td>-</td></tr><tr><td>SG-Nav [60]</td><td>✓</td><td>✓</td><td>×</td><td>54.0</td><td>24.9</td><td>40.2</td><td>16.0</td></tr><tr><td>g3D-LF (Ours)</td><td>×</td><td>×</td><td>✓</td><td>55.6</td><td>31.8</td><td>39.0</td><td>18.8</td></tr></table>
In Table 4 for the Situated Question Answering task, our g3D-LF achieves good localization performance in metrics of Acc@0.5m, Acc@1m, Acc@15° and Acc@30°. Although our performance on the answering accuracy (EM@1) is significantly lower than that of LLM-based methods:
LEO [22] and Scene-LLM [14], it is worth noting that our g3D-LF only uses images as input without low-noise 3D point clouds. This actually offers a significant advantage in agent-centered embodied tasks since it is more adaptable to unseen real-world environments, where the low-noise point clouds are difficult to collect.
Table 3. Evaluation of Zero-shot Object Navigation on the HM3D and MP3D benchmarks.
<table><tr><td rowspan="2">Methods</td><td rowspan="2">LLM</td><td rowspan="2">PCD</td><td rowspan="2">Image</td><td colspan="2">Position</td><td colspan="2">Orientation</td><td>Answer</td></tr><tr><td>0.5m</td><td>1.0m</td><td>15°</td><td>30°</td><td>EM@1</td></tr><tr><td>ClipBERT [29]</td><td>×</td><td>×</td><td>✓</td><td>-</td><td>-</td><td>-</td><td>-</td><td>43.3</td></tr><tr><td>ScanQA [4]</td><td>×</td><td>✓</td><td>×</td><td>-</td><td>-</td><td>-</td><td>-</td><td>46.6</td></tr><tr><td>SQA3D [37]</td><td>×</td><td>✓</td><td>×</td><td>14.6</td><td>34.2</td><td>22.4</td><td>42.3</td><td>47.2</td></tr><tr><td>3D-VisTA [72]</td><td>×</td><td>✓</td><td>×</td><td>-</td><td>-</td><td>-</td><td>-</td><td>48.5</td></tr><tr><td>SceneVerse [23]</td><td>×</td><td>✓</td><td>×</td><td>-</td><td>-</td><td>-</td><td>-</td><td>49.9</td></tr><tr><td>LEO [22]</td><td>✓</td><td>✓</td><td>×</td><td>-</td><td>-</td><td>-</td><td>-</td><td>52.4</td></tr><tr><td>Scene-LLM [14]</td><td>✓</td><td>×</td><td>✓</td><td>-</td><td>-</td><td>-</td><td>-</td><td>54.2</td></tr><tr><td>g3D-LF (Ours)</td><td>×</td><td>×</td><td>✓</td><td>23.4</td><td>45.7</td><td>29.8</td><td>54.7</td><td>47.7</td></tr></table>
Table 4. Evaluation of Situated Question Answering (SQA3D) task. PCD denotes methods that use point clouds as input, while Image represents methods that use images as input.
# 4.3. Ablation Study
Performance impact of g3D-LF on embodied tasks. In row 1 of Table 5, the performance of monocular VLN and object navigation drops significantly without representations from g3D-LF. In this setting, the VLN model only uses the CLIP features from the forward-facing view, with the features of all other directions set to zero, and the object navigation model uses BLIP-2 [30] instead of g3D-LF to construct the value map. Examining rows 2 and 3 shows that removing either the novel view or the BEV map reduces the performance of both tasks, highlighting the role of each g3D-LF module.
Novel views are crucial for monocular VLN. As shown in row 1 and row 2 of Table 5, the novel view representations significantly boost VLN performance by overcoming the narrow perception of the monocular camera [57], enabling the monocular agent to have panoramic perception capabilities. To some extent, this confirms that novel view prediction is a very important and valuable capability for monocular agents. Based on this capability, the g3D-LF model predicts the novel view representations of candidate waypoints around the agent to construct the topological map for better navigation planning.
Object navigation requires balancing local and global targets. As shown in row 3 of Table 5, we observe that relying solely on BEV representation significantly reduces object navigation performance. This decline occurs because the global value map from the BEV map fails to select optimal nearby waypoints if the target is far from these waypoints. In this case, a local value map constructed from novel views is also essential to identify the optimal short-term goal, i.e., nearby waypoints around the agent.
<table><tr><td rowspan="2">View & Pano</td><td rowspan="2">BEV</td><td colspan="4">Monocular VLN</td><td colspan="2">Object Nav.</td></tr><tr><td>NE↓</td><td>OSR↑</td><td>SR↑</td><td>SPL↑</td><td>SR↑</td><td>SPL↑</td></tr><tr><td>×</td><td>×</td><td>6.54</td><td>44.6</td><td>33.1</td><td>23.4</td><td>52.5</td><td>30.4</td></tr><tr><td>✓</td><td>×</td><td>5.78</td><td>58.3</td><td>46.9</td><td>32.7</td><td>53.9</td><td>30.8</td></tr><tr><td>×</td><td>✓</td><td>6.02</td><td>53.1</td><td>42.8</td><td>26.5</td><td>50.2</td><td>27.1</td></tr><tr><td>✓</td><td>✓</td><td>5.70</td><td>59.5</td><td>47.2</td><td>34.6</td><td>55.6</td><td>31.8</td></tr></table>
Table 5. Ablation study for the modules of g3D-LF.
<table><tr><td rowspan="2">OBJ-CL</td><td rowspan="2">CLIP-CL</td><td rowspan="2">FG-CL</td><td colspan="4">Monocular VLN</td><td colspan="2">Object Nav.</td></tr><tr><td>NE↓</td><td>OSR↑</td><td>SR↑</td><td>SPL↑</td><td>SR↑</td><td>SPL↑</td></tr><tr><td>×</td><td>×</td><td>×</td><td>6.21</td><td>50.2</td><td>40.7</td><td>24.9</td><td>34.2</td><td>13.9</td></tr><tr><td>×</td><td>✓</td><td>×</td><td>5.84</td><td>56.1</td><td>44.6</td><td>31.1</td><td>47.6</td><td>27.8</td></tr><tr><td>✓</td><td>×</td><td>✓</td><td>6.01</td><td>53.5</td><td>42.4</td><td>26.7</td><td>55.8</td><td>31.6</td></tr><tr><td>unbalanced</td><td>✓</td><td>✓</td><td>5.73</td><td>58.3</td><td>46.6</td><td>33.0</td><td>51.7</td><td>28.8</td></tr><tr><td>✓</td><td>✓</td><td>coarse</td><td>5.81</td><td>57.1</td><td>45.7</td><td>33.2</td><td>55.5</td><td>31.2</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>5.70</td><td>59.5</td><td>47.2</td><td>34.6</td><td>55.6</td><td>31.8</td></tr></table>
Table 6. Ablation study for the multi-level contrastive pre-training. OBJ-CL: object-level contrastive learning. CLIP-CL: knowledge distillation using CLIP visual features from ground-truth view. FG-CL: fine-grained contrastive learning for long text understanding.
Pre-training is essential for generalizable feature fields model. Table 6 analyzes the impact of multi-level contrastive pre-training on downstream embodied tasks. As shown in row 1 of Table 6, the performance on VLN and object navigation drops significantly when the model is optimized solely by the navigation loss [2] without pre-training.
Both CLIP distillation and language supervision are indispensable. In row 3 of Table 6, without supervision from the CLIP visual features, the VLN performance lags behind that of the model distilled with CLIP. This suggests that millions of language annotations are still far from sufficient for g3D-LF pre-training, and distilling representations from 2D foundation models to enhance semantic generalization remains necessary. However, Table 6 also shows that language supervision significantly improves g3D-LF performance on embodied tasks; the model performs poorly in row 2 when using only CLIP distillation.
Long-tail distribution limits object-level semantic learning. As shown in row 4 of Table 6, the performance of object navigation decreases drastically without the balanced loss described in Section 3.3. The long-tailed distribution of object categories in indoor environments leads the model to overlook rare or small objects such as towels and cups, significantly limiting the ability of our g3D-LF model to query target objects. Fortunately, row 6 of Table 6 shows that the balanced object alignment works well by increasing the loss weight of hard-to-recognize objects.
Fine-grained contrastive benefits long text understanding. In row 5 of Table 6, we use the [SEP] feature (a single vector) from the CLIP text encoder to supervise the panorama and BEV representations. However, compared to the fine-grained contrastive learning in row 6, compressing long text
into a coarse vector significantly limits g3D-LF's performance on long-text understanding tasks such as VLN. As shown in Figure 2, fine-grained contrastive learning between long texts and windows within the BEV map helps g3D-LF understand spatial layouts, overcoming the limitations of semantic representation in large-scale scenes.
<table><tr><td>Rays for View</td><td>View</td><td>Panorama</td><td>Rays for BEV</td><td>BEV</td></tr><tr><td>73.6 FPS</td><td>71.1 FPS</td><td>5.9 FPS</td><td>6.3 FPS</td><td>6.1 FPS</td></tr></table>
Table 7. Runtime analysis measured on one RTX 4090 GPU. FPS denotes Frames Per Second.
g3D-LF enables real-time inference. As shown in Table 7, we calculate the inference time of our g3D-LF model on the val unseen split of the R2R-CE dataset in the VLN task. Our g3D-LF achieves novel view volume rendering at 73.6 FPS, which slightly drops to 71.1 FPS when rays are further encoded by the View Encoder. For a panorama containing 12 views, the inference speed is 5.9 FPS. Due to the large rendered range, our g3D-LF renders BEV maps at 6.3 FPS, which drops slightly to 6.1 FPS with the BEV Map Encoder. Our g3D-LF model adopts the same sparse sampling strategy as in HNR [56], where the MLP network is only used to render sampled regions containing feature points nearby, while skipping empty regions. This reduces rendering time by over 10 times, enabling real-time embodied tasks.
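
A sketch of this sparse sampling idea is given below; the neighborhood radius, chunking, and data layout are assumptions.

```python
import torch

def sparse_ray_mask(sample_points, feature_points, radius=0.2, chunk=4096):
    """Sketch of the sparse sampling described above: only evaluate the rendering
    MLP at sample points that have at least one feature point nearby, and skip
    empty regions. sample_points: (S, 3), feature_points: (F, 3)."""
    keep = torch.zeros(sample_points.shape[0], dtype=torch.bool)
    for start in range(0, sample_points.shape[0], chunk):
        pts = sample_points[start:start + chunk]                  # (c, 3)
        d = torch.cdist(pts, feature_points)                      # (c, F) pairwise distances
        keep[start:start + chunk] = d.min(dim=1).values < radius
    return keep  # the MLP runs only where keep is True; elsewhere the density is zero
```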
# 5. Conclusion
In this work, we propose Generalizable 3D-Language Feature Fields (g3D-LF), a 3D representation model pre-trained on large-scale 3D-language data for embodied tasks. We organize the first large-scale 3D-language dataset for feature fields training, demonstrating the feasibility of using generalizable feature fields for large-scale scene understanding, i.e., panorama and BEV. Our proposed g3D-LF leverages multi-level contrastive learning strategies such as balanced object semantic alignment, fine-grained text alignment, and CLIP knowledge distillation to optimize generalized feature fields. More importantly, the value of g3D-LF has been widely evaluated in multiple embodied tasks. We believe that our g3D-LF can provide sufficient inspiration for subsequent research on feature fields and embodied AI.
Limitations and future works. Our g3D-LF still has some limitations, with significant potential for future research: 1) g3D-LF cannot yet handle dynamic environments, where objects or people move in real time. 2) g3D-LF has not been evaluated on dynamic tasks such as object manipulation. 3) The scale and quality of the 3D-language data used for training g3D-LF remain limited, which restricts the ability of generalizable feature field models. 4) Combining 3D feature fields with LLMs could enable better text generation. These may become guiding directions for the next phase of generalizable feature fields research.
Acknowledgement. This research work is supported by the Tier 2 grant MOE-T2EP20124-0015 from the Singapore Ministry of Education.
# References
[1] Dong An, Yuankai Qi, Yangguang Li, Yan Huang, Liang Wang, Tieniu Tan, and Jing Shao. Bevbert: Multimodal map pre-training for language-guided navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2737-2748, 2023. 2, 7
[2] Dong An, Hanqing Wang, Wenguan Wang, Zun Wang, Yan Huang, Keji He, and Liang Wang. Etpnav: Evolving topological planning for vision-language navigation in continuous environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 2, 5, 7, 8
[3] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3674-3683, 2018. 1, 2, 6
[4] Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe. Scanqa: 3d question answering for spatial scene understanding. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19129-19139, 2022. 1, 3, 7
[5] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. In International Conference on 3D Vision (3DV), 2017. 6
[6] Devendra Singh Chaplot, Dhiraj Prakashchand Gandhi, Abhinav Gupta, and Russ R Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. Advances in Neural Information Processing Systems, 33:4247-4258, 2020. 1, 2
[7] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. In European conference on computer vision, pages 202-221. Springer, 2020. 2, 3
[8] Peihao Chen, Dongyu Ji, Kunyang Lin, Runhao Zeng, Thomas Li, Mingkui Tan, and Chuang Gan. Weakly-supervised multi-granularity map learning for vision-and-language navigation. Advances in Neural Information Processing Systems, 35:38149-38161, 2022. 6
[9] Shizhe Chen, Pierre-Louis Guhur, Cordelia Schmid, and Ivan Laptev. History aware multimodal transformer for vision-and-language navigation. Advances in neural information processing systems, 34:5834-5847, 2021. 2
[10] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, and Ivan Laptev. Think global, act local: Dual-scale graph transformer for vision-and-language navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16537-16547, 2022. 2, 5
[11] Yilun Chen, Shuai Yang, Haifeng Huang, Tai Wang, Ruiyuan Lyu, Runsen Xu, Dahua Lin, and Jiangmiao Pang. Grounded
3d-llm with referent tokens. arXiv preprint arXiv:2405.10370, 2024. 1
[12] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828-5839, 2017. 3, 6
[13] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-10, 2018. 3
[14] Rao Fu, Jingyu Liu, Xilun Chen, Yixin Nie, and Wenhan Xiong. Scene-llm: Extending language model for 3d visual understanding and reasoning. arXiv preprint arXiv:2403.11401, 2024. 1, 3, 7
[15] Samir Yitzhak Gadre, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt, and Shuran Song. Cows on pasture: Baselines and benchmarks for language-driven zero-shot object navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23171-23181, 2023. 2
[16] Georgios Georgakis, Karl Schmeckpeper, Karan Wanchoo, Soham Dan, Eleni Miltsakaki, Dan Roth, and Kostas Dani-ilidis. Cross-modal map learning for vision and language navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15460-15470, 2022. 6
[17] Theophile Gervet, Soumith Chintala, Dhruv Batra, Jitendra Malik, and Devendra Singh Chaplot. Navigating to objects in the real world. Science Robotics, 8(79):eadf6991, 2023. 2
[18] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969, 2017. 2
[19] Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, and Stephen Gould. Vln bert: A recurrent vision-and-language bert for navigation. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 1643–1653, 2021. 2
[20] Yicong Hong, Zun Wang, Qi Wu, and Stephen Gould. Bridging the gap between learning in discrete and continuous environments for vision-and-language navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 5, 7
[21] Yicong Hong, Yang Zhou, Ruiyi Zhang, Franck Dernoncourt, Trung Bui, Stephen Gould, and Hao Tan. Learning navigational visual representations with semantic map supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3055-3067, 2023. 7
[22] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. In Proceedings of the International Conference on Machine Learning (ICML), 2024. 1, 3, 7
[23] Baoxiong Jia, Yixin Chen, Huangyue Yu, Yan Wang, Xuesong Niu, Tengyu Liu, Qing Li, and Siyuan Huang. Sceneverse: Scaling 3d vision-language learning for grounded scene understanding. In European Conference on Computer Vision (ECCV), 2024. 2, 3, 7
[24] Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19729-19739, 2023. 2
[25] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015-4026, 2023. 2
[26] Jacob Krantz and Stefan Lee. Sim-2-sim transfer for vision-and-language navigation in continuous environments. In European Conference on Computer Vision (ECCV), 2022. 2, 7
[27] Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, and Stefan Lee. Beyond the nav-graph: Vision-and-language navigation in continuous environments. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVIII 16, pages 104-120. Springer, 2020. 1, 2, 6
[28] Obin Kwon, Jeongho Park, and Songhwai Oh. Rendering neural radiance map for visual navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9099-9108, 2023. 1
[29] Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L Berg, Mohit Bansal, and Jingjing Liu. Less is more: Clipbert for video-and-language learning via sparse sampling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7331-7341, 2021. 7
[30] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2, 5, 7
[31] Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10965-10975, 2022. 2
[32] Rui Liu, Xiaohan Wang, Wenguan Wang, and Yi Yang. Bird's-eye-view scene graph for vision-language navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10968-10980, 2023. 2
[33] Rui Liu, Wenguan Wang, and Yi Yang. Vision-language navigation with energy-based policy. In Advances in Neural Information Processing Systems, 2024. 7
[34] Rui Liu, Wenguan Wang, and Yi Yang. Volumetric environment representation for vision-language navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16317-16328, 2024. 1
[35] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 2
[36] Yuxing Long, Wenzhe Cai, Hongcheng Wang, Guanqi Zhan, and Hao Dong. Instructnav: Zero-shot system for generic
instruction navigation in unexplored environment. In 8th Annual Conference on Robot Learning, 2024. 6, 7
[37] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. In The Eleventh International Conference on Learning Representations, 2023. 1, 2, 3, 6, 7
[38] Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, and Dhruv Batra. Zson: Zero-shot object-goal navigation using multimodal goal embeddings. Advances in Neural Information Processing Systems, 35:32340-32352, 2022. 1, 2, 7
[39] Arjun Majumdar, Anurag Ajay, Xiaohan Zhang, Pranav Putta, Sriram Yenamandra, Mikael Henaff, Sneha Silwal, Paul Mcvay, Oleksandr Maksymets, Sergio Arnaud, et al. Openeqa: Embodied question answering in the era of foundation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16488-16498, 2024. 1, 3
[40] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 2
[41] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. Transactions on Machine Learning Research Journal, pages 1-31, 2024. 1
[42] Yanyuan Qiao, Yuankai Qi, Yicong Hong, Zheng Yu, Peng Wang, and Qi Wu. Hop+: History-enhanced and order-aware pre-training for vision-and-language navigation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7): 8524-8537, 2023. 2
[43] Ri-Zhao Qiu, Yafei Hu, Ge Yang, Yuchen Song, Yang Fu, Jianglong Ye, Jiteng Mu, Ruihan Yang, Nikolay Atanasov, Sebastian Scherer, et al. Learning generalizable feature fields for mobile manipulation. arXiv preprint arXiv:2403.07563, 2024. 1, 2
[44] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 1, 2, 5
[45] Santhosh Kumar Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alexander Clegg, John M Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al. Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). 3, 6
[46] Santhosh Kumar Ramakrishnan, Devendra Singh Chaplot, Ziad Al-Halah, Jitendra Malik, and Kristen Grauman. Poni: Potential functions for objectgoal navigation with interaction-free learning. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, pages 18890-18900, 2022. 2
[47] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied ai research. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9339-9347, 2019. 6
[48] William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, and Phillip Isola. Distilled feature fields enable few-shot language-guided manipulation. In Proceedings of The 7th Conference on Robot Learning, pages 405–424. PMLR, 2023. 1
[49] Francesco Taioli, Federico Cunico, Federico Girella, Riccardo Bologna, Alessandro Farinelli, and Marco Cristani. Language-enhanced rnr-map: Querying renderable neural radiance field maps with natural language. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4669-4674, 2023. 2
[50] Suhani Vora, Noha Radwan, Klaus Greff, Henning Meyer, Kyle Genova, Mehdi SM Sajjadi, Etienne Pot, Andrea Tagliasacchi, and Daniel Duckworth. Nesf: Neural semantic fields for generalizable semantic segmentation of 3d scenes. Transactions on Machine Learning Research. 2
[51] Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7464-7475, 2023. 2
[52] Hanqing Wang, Wei Liang, Luc Van Gool, and Wenguan Wang. Dreamwalker: Mental planning for continuous vision-language navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10873-10883, 2023. 7
|
| 296 |
+
[53] Liuyi Wang, Zongtao He, Ronghao Dang, Mengjiao Shen, Chengju Liu, and Qijun Chen. Vision-and-language navigation via causal learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13139-13150, 2024. 2
|
| 297 |
+
[54] Zun Wang, Jialu Li, Yicong Hong, Yi Wang, Qi Wu, Mohit Bansal, Stephen Gould, Hao Tan, and Yu Qiao. Scaling data generation in vision-and-language navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12009-12020, 2023. 7
|
| 298 |
+
[55] Zihan Wang, Xiangyang Li, Jiahao Yang, Yeqi Liu, and Shuqiang Jiang. Gridmm: Grid memory map for vision-and-language navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15625-15636, 2023. 2, 5, 7
|
| 299 |
+
[56] Zihan Wang, Xiangyang Li, Jiahao Yang, Yeqi Liu, Junjie Hu, Ming Jiang, and Shuqiang Jiang. Lookahead exploration with neural radiance representation for continuous vision-language navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13753-13762, 2024. 1, 2, 3, 5, 6, 7, 8
|
| 300 |
+
[57] Zihan Wang, Xiangyang Li, Jiahao Yang, Yeqi Liu, and Shuqiang Jiang. Sim-to-real transfer via 3d feature fields
|
| 301 |
+
|
| 302 |
+
for vision-and-language navigation. In 8th Annual Conference on Robot Learning, 2024. 1, 2, 5, 6, 7
|
| 303 |
+
[58] Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames. arXiv preprint arXiv:1911.00357, 2019. 2
|
| 304 |
+
[59] Karmesh Yadav, Ram Ramrakhya, Santhosh Kumar Ramakrishnan, Theo Gervet, John Turner, Aaron Gokaslan, Noah Maestre, Angel Xuan Chang, Dhruv Batra, Manolis Savva, et al. Habitat-matterport 3d semantics dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4927-4936, 2023. 3, 6
|
| 305 |
+
[60] Hang Yin, Xiuwei Xu, Zhenyu Wu, Jie Zhou, and Jiwen Lu. Sg-nav: Online 3d scene graph prompting for llm-based zero-shot object navigation. In Advances in Neural Information Processing Systems, 2024. 7
|
| 306 |
+
[61] Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, and Bernadette Bucher. Vlfm: Vision-language frontier maps for zero-shot semantic navigation. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 42-48. IEEE, 2024. 1, 2, 5, 7
|
| 307 |
+
[62] Shuaihang Yuan, Hao Huang, Yu Hao, Congcong Wen, Anthony Tzes, and Yi Fang. Gamap: Zero-shot object goal navigation with multi-scale geometric-affordance guidance. In Advances in Neural Information Processing Systems, 2024. 7
|
| 308 |
+
[63] Yanjie Ze, Ge Yan, Yueh-Hua Wu, Annabella Macaluso, Yuying Ge, Jianglong Ye, Nicklas Hansen, Li Erran Li, and Xiaolong Wang. Gnfactor: Multi-task real robot learning with generalizable neural feature fields. In Conference on Robot Learning, pages 284-301. PMLR, 2023. 1
|
| 309 |
+
[64] Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae, Seungkyu Lee, and Choong Seon Hong. Faster segment anything: Towards lightweight sam for mobile applications. arXiv preprint arXiv:2306.14289, 2023. 2
|
| 310 |
+
[65] Haochen Zhang, Nader Zantout, Pujith Kachana, Zongyuan Wu, Ji Zhang, and Wenshan Wang. Vla-3d: A dataset for 3d semantic scene understanding and navigation. arXiv preprint arXiv:2411.03540, 2024. 2
|
| 311 |
+
[66] Jiazhao Zhang, Kunyu Wang, Rongtao Xu, Gengze Zhou, Yicong Hong, Xiaomeng Fang, Qi Wu, Zhizheng Zhang, and He Wang. Nvid: Video-based vlm plans the next step for vision-and-language navigation. In Proceedings of Robotics: Science and Systems (RSS), 2024. 6
|
| 312 |
+
[67] Sixian Zhang, Xinhang Song, Yubing Bai, Weijie Li, Yakui Chu, and Shuqiang Jiang. Hierarchical object-to-zone graph for object navigation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 15130-15140, 2021. 2
|
| 313 |
+
[68] Yue Zhang, Ziqiao Ma, Jialu Li, Yanyuan Qiao, Zun Wang, Joyce Chai, Qi Wu, Mohit Bansal, and Parisa Kordjamshidi. Vision-and-language navigation today and tomorrow: A survey in the era of foundation models. arXiv preprint arXiv:2407.07035, 2024. 2
|
| 314 |
+
[69] Jia Zheng, Junfei Zhang, Jing Li, Rui Tang, Shenghua Gao, and Zihan Zhou. Structured3d: A large photo-realistic dataset for structured 3d modeling. In Proceedings of The European Conference on Computer Vision (ECCV), 2020. 3, 6
|
| 315 |
+
|
| 316 |
+
[70] Kaiwen Zhou, Kaizhi Zheng, Connor Pryor, Yilin Shen, Hongxia Jin, Lise Getoor, and Xin Eric Wang. Esc: Exploration with soft commonsense constraints for zero-shot object navigation. In International Conference on Machine Learning, pages 42829-42842. PMLR, 2023. 2, 7
|
| 317 |
+
[71] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In 2017 IEEE international conference on robotics and automation (ICRA), pages 3357-3364. IEEE, 2017. 2
|
| 318 |
+
[72] Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li. 3d-vista: Pre-trained transformer for 3d vision and text alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2911-2921, 2023. 1, 7
|
CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18e6b7a938a3315d597b228b2b1557b28ccafb54b46f864f964abff49b0db656
+size 623168
CVPR/2025/g3D-LF_ Generalizable 3D-Language Feature Fields for Embodied Tasks/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3432db4e7addf3e40bdbbc999d8729b4e65e36f79d2fde52f9d987279ebc7abf
+size 401489
CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/a31f06fe-0eb5-4f9e-ab73-37feb4a38682_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7648aebcb62a21b3fb42565ebdc17d326163f527c8ebba085b3b3b727769957e
+size 92834
CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/a31f06fe-0eb5-4f9e-ab73-37feb4a38682_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a6f8005032b1628cc366ff9fe04b1df4cdf83c8b333c39e8399e7c9979c3c66
+size 119625
CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/a31f06fe-0eb5-4f9e-ab73-37feb4a38682_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4756b0cf5cb2ec924ff5ce97d607acb647186133fbf4a91e5e462e87af66ffdc
+size 9417236
CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/full.md
ADDED
@@ -0,0 +1,440 @@
# h-Edit: Effective and Flexible Diffusion-Based Editing via Doob's $h$ -Transform
|
| 2 |
+
|
| 3 |
+
Toan Nguyen*
|
| 4 |
+
|
| 5 |
+
Kien Do*
|
| 6 |
+
|
| 7 |
+
Duc Kieu
|
| 8 |
+
|
| 9 |
+
Thin Nguyen
|
| 10 |
+
|
| 11 |
+
{k.nguyen, k.do, v.kieu, thin.nguyen}@deakin.edu.au
|
| 12 |
+
|
| 13 |
+
Applied Artificial Intelligence Institute (A2I2), Deakin University, Australia
|
| 14 |
+
|
| 15 |
+
* Equal contribution
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Figure 1. Qualitative comparison between $h$ -Edit and other training-free editing baselines. Our method achieves more accurate and faithful edits than the baselines. Additional visualizations are provided in the Appendix.
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
|
| 22 |
+
# Abstract
|
| 23 |
+
|
| 24 |
+
We introduce a theoretical framework for diffusion-based image editing by formulating it as a reverse-time bridge modeling problem. This approach modifies the backward process of a pretrained diffusion model to construct a bridge that converges to an implicit distribution associated with the editing target at time 0. Building on this framework, we propose h-Edit, a novel editing method that utilizes Doob's h-transform and Langevin Monte Carlo to decompose the update of an intermediate edited sample into two components: a "reconstruction" term and an "editing" term. This decomposition provides flexibility, allowing the reconstruction term to be computed via existing inversion techniques and enabling the combination of multiple editing terms to handle complex editing tasks. To our knowledge, h-Edit is the first training-free method capable of performing simultaneous text-guided and reward-model-based editing. Extensive experiments, both quantitative and qualitative, show that h-Edit outperforms state-of-the-art baselines in terms of editing effectiveness and faithfulness.
|
| 25 |
+
|
| 26 |
+
# 1. Introduction
|
| 27 |
+
|
| 28 |
+
Diffusion models [22, 62, 65] have established themselves as a powerful class of generative models, achieving state-of-the-art performance in image generation [64]. When combined with classifier-based [12] or classifier-free guidance [21], these models offer enhanced control, enabling a wide range of applications including conditional generation [79, 80], image-to-image translation [8, 56], and image editing [19, 23, 44]. A prominent example is large-scale text-guided diffusion models [47, 57] like Stable Diffusion (SD) [55], which have gained widespread popularity for their ability to produce diverse high-quality images that closely align with specified natural language descriptions.
|
| 29 |
+
|
| 30 |
+
However, leveraging pretrained text-guided diffusion models for image editing presents significant challenges, particularly in balancing effective editing with faithful preservation of the unrelated content in the original image. Moreover, combining text-guided editing with other forms of editing to handle more complex requirements remains a difficult task. Although many training-free image editing methods have been proposed recently [7, 19, 24, 27, 46, 70], most of these efforts focus on improving reconstruction quality through better inversion techniques or attention-map adjustment, while leaving the editing part largely unchanged. Additionally, many of these methods are based on heuristics or intuition, lacking a clear theoretical foundation to justify their effectiveness. This limitation restricts the generalization of these approaches to more complex scenarios where multiple types of editing must be applied.
|
| 33 |
+
|
| 34 |
+
In this work, we aim to fill the theoretical gap by introducing a theoretical framework for image editing, formulated as a reverse-time bridge modeling problem. Our approach modifies the backward process of a pretrained diffusion model using Doob's $h$ -transform [15, 54, 58] to create a bridge that converges to the distribution $p(x_0)h(x_0,0)$ at time 0. Here, $p(x_0)$ represents the realism of $x_0$ , while $h(x_0,0)$ captures the probability that $x_0$ has the target property. To perform editing, we first map the original image $x_0^{\mathrm{orig}}$ to its prior $x_T^{\mathrm{orig}}$ through the diffusion forward process. Starting from $x_T^{\mathrm{edit}} = x_T^{\mathrm{orig}}$ , we follow the bridge to generate an edited image $x_0^{\mathrm{edit}}$ by sampling from its transition kernel $p^h (x_{t - 1}|x_t)$ using Langevin Monte Carlo (LMC) [53, 74].
|
| 35 |
+
|
| 36 |
+
Building on the decomposability of $p^h (x_{t - 1}|x_t)$ , we propose $h$ -Edit - a novel editing method that disentangles the update of $x_{t - 1}^{\mathrm{edit}}$ into a "reconstruction" term $x_{t - 1}^{\mathrm{base}}$ (capturing editing faithfulness) and an "editing" term (capturing editing effectiveness). This design provides significant flexibility, as the editing term can be easily customized for different tasks with minimal interference in non-edited regions. $h$ -Edit updates can be either explicit or implicit, with $\nabla \log h(x_{t},t)$ and $\nabla \log h(x_{t - 1},t - 1)$ being the corresponding editing terms, respectively. In the latter case, $h$ -Edit can also be interpreted from an optimization perspective where $\log h(x_{t - 1},t - 1)$ is maximized w.r.t. $x_{t - 1}$ , taking $x_{t - 1}^{\mathrm{base}}$ as the initial value. This allows for multiple optimization steps to enhance editing effectiveness.
|
| 37 |
+
|
| 38 |
+
While $x_{t-1}^{\mathrm{base}}$ can generally be estimated by leveraging existing inversion techniques [24, 27, 46, 64], the computation of $\nabla \log h(x_{t-1}, t-1)$ depends on the chosen $h$ -function. In this work, we present several key designs of the $h$ -function tailored to popular editing tasks, including text-guided editing with SD and editing with external reward models on clean data. Furthermore, by treating $\log h$ as a negative energy function, we can easily combine multiple $h$ -functions to create a "product of $h$ -experts", which enables compositional editing.
|
| 39 |
+
|
| 40 |
+
Through extensive experiments on a range of editing tasks - including text-guided editing, combined text-guided and style editing, and face swapping - we demonstrate the strong editing capabilities of $h$-Edit. Both quantitative and qualitative results indicate that $h$-Edit not only significantly outperforms existing state-of-the-art methods in text-guided editing but also excels in the two other tasks. Our method effectively handles various difficult editing cases in the PIE-Bench dataset where existing methods fall short. To our knowledge, $h$-Edit is the first diffusion-based training-free editing method that supports simultaneous text-guided and reward-model-based editing.
|
| 43 |
+
|
| 44 |
+
# 2. Preliminaries
|
| 45 |
+
|
| 46 |
+
# 2.1. Diffusion Models
|
| 47 |
+
|
| 48 |
+
Diffusion models [22, 62, 65] iteratively transform the data distribution $p(x_0)$ into the prior distribution $p(x_{T}) = \mathcal{N}(0,\mathrm{I})$ via a predefined forward stochastic process characterized by $p(x_{t}|x_{t - 1})$ , and learn the reverse transition distribution $p_{\theta}\left(x_{t - 1}|x_t\right)$ to map $p(x_{T})$ back to $p(x_0)$ . Given the Gaussian form and Markov property of $p(x_{t}|x_{t - 1})$ , $p(x_{t}|x_{0})$ is a Gaussian distribution $\mathcal{N}\left(a_{t}x_{0},\sigma_{t}^{2}\mathrm{I}\right)$ , allowing $x_{t}$ to be sampled from $p(x_{t}|x_{0})$ as follows:
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
x _ {t} = a _ {t} x _ {0} + \sigma_ {t} \epsilon \tag {1}
|
| 52 |
+
$$
|
| 53 |
+
|
| 54 |
+
with $\epsilon \sim \mathcal{N}(0,\mathrm{I})$ . In DDPM [22], $a_{t} = \sqrt{\bar{\alpha}_{t}}$ and $\sigma_t = \sqrt{1 - \bar{\alpha}_t}$ . $p_{\theta}(x_{t - 1}|x_t)$ is parameterized as a Gaussian distribution $\mathcal{N}\left(\mu_{\theta ,\omega ,t,t - 1}(x_t),\omega_{t,t - 1}^2\mathrm{I}\right)$ with the mean
|
| 55 |
+
|
| 56 |
+
$$
|
| 57 |
+
\begin{array}{l} \mu_ {\theta , \omega , t, t - 1} \left(x _ {t}\right) := \\ \frac {a _ {t - 1}}{a _ {t}} x _ {t} + \left(\sqrt {\sigma_ {t - 1} ^ {2} - \omega_ {t , t - 1} ^ {2}} - \frac {\sigma_ {t} a _ {t - 1}}{a _ {t}}\right) \epsilon_ {\theta} (x _ {t}, t) \tag {2} \\ \end{array}
|
| 58 |
+
$$
|
| 59 |
+
|
| 60 |
+
Here, $\omega_{t,t - 1} = \lambda \sigma_{t - 1}\sqrt{1 - \frac{a_t^2\sigma_{t - 1}^2}{a_{t - 1}^2\sigma_t^2}}$ with $\lambda \in [0,1]$ . $\lambda = 0$ and $\lambda = 1$ correspond to DDIM sampling [64] and DDPM sampling [22], respectively. Eq. 2 implies that $x_{t - 1} \sim p_{\theta}(x_{t - 1}|x_t)$ is given by:
|
| 61 |
+
|
| 62 |
+
$$
|
| 63 |
+
x _ {t - 1} = \mu_ {\theta , \omega , t, t - 1} \left(x _ {t}\right) + \omega_ {t, t - 1} z _ {t} \tag {3}
|
| 64 |
+
$$
|
| 65 |
+
|
| 66 |
+
with $z_{t} \sim \mathcal{N}(0, \mathrm{I})$ . Diffusion models support conditional generation via classifier-based [12] and classifier-free [21] guidances. The latter is more prevalent, with Stable Diffusion (SD) [55] serving as a notable example. In SD, both the unconditional and text-conditional noise networks - $\epsilon_{\theta}(x_t, t, \emptyset)$ and $\epsilon_{\theta}(x_t, t, c)$ - are learned, and their linear combination $\tilde{\epsilon}_{\theta}(x_t, t, c) \coloneqq w \epsilon_{\theta}(x_t, t, c) + (1 - w) \epsilon_{\theta}(x_t, t, \emptyset)$ , with $w > 0$ denoting the guidance weight, is often used for sampling. This results in the following sampling step for SD:
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
x _ {t - 1} = \tilde {\mu} _ {\theta , \omega , t, t - 1} \left(x _ {t}, c\right) + \omega_ {t, t - 1} z _ {t} \tag {4}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
where $\tilde{\mu}_{\theta,\omega,t,t-1}$ follows the same form as $\mu_{\theta,\omega,t,t-1}(x_t)$ in Eq. 2 but with $\epsilon_{\theta}(x_t,t)$ replaced by $\tilde{\epsilon}_{\theta}(x_t,t,c)$ .
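As a concrete illustration of the sampling step in Eqs. 2-4, the sketch below implements one guided reverse update; `eps_model`, the schedule tensors `a` and `sigma`, and the guidance weight `w` are placeholder assumptions, not the paper's released code.

```python
import torch

def guided_sampling_step(eps_model, x_t, t, c, null_c, a, sigma, w=7.5, lam=1.0):
    """One reverse step x_t -> x_{t-1} following Eqs. 2-4 (illustrative sketch).

    a, sigma: per-timestep schedule tensors; lam=0 gives DDIM-style, lam=1 DDPM-style noise.
    """
    # Classifier-free guidance: eps_tilde = w * eps(x, c) + (1 - w) * eps(x, null)
    eps_tilde = w * eps_model(x_t, t, c) + (1.0 - w) * eps_model(x_t, t, null_c)

    # Noise level omega_{t,t-1} of the transition kernel (Eq. 2)
    omega = lam * sigma[t - 1] * torch.sqrt(
        1.0 - (a[t] ** 2 * sigma[t - 1] ** 2) / (a[t - 1] ** 2 * sigma[t] ** 2))

    # Mean of p_theta(x_{t-1} | x_t) with the guided noise prediction (Eq. 4)
    mu = (a[t - 1] / a[t]) * x_t + (
        torch.sqrt(sigma[t - 1] ** 2 - omega ** 2) - sigma[t] * a[t - 1] / a[t]) * eps_tilde

    return mu + omega * torch.randn_like(x_t)
```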
|
| 73 |
+
|
| 74 |
+
# 2.2. Image Editing with Stable Diffusion
|
| 75 |
+
|
| 76 |
+
The design of SD facilitates text-guided image editing, which involves modifying some attributes of the original image $x_0^{\mathrm{orig}}$ while preserving other features (e.g., background) by adjusting the corresponding text prompt $c^{\mathrm{orig}}$. A naive approach is mapping $x_0^{\mathrm{orig}}$ to $x_T^{\mathrm{orig}}$ using DDIM inversion w.r.t. $c^{\mathrm{orig}}$, followed by generating $x_0^{\mathrm{edit}}$ from $x_T^{\mathrm{edit}} = x_T^{\mathrm{orig}}$ via DDIM sampling (Eq. 4) w.r.t. $c^{\mathrm{edit}}$ - the edited version of $c^{\mathrm{orig}}$. DDIM inversion is the reverse of DDIM sampling, which achieves nearly exact reconstruction in the unconditional case [19, 64]. For SD, DDIM inversion is expressed as:
|
| 79 |
+
|
| 80 |
+
$$
|
| 81 |
+
x _ {t} = \frac {a _ {t}}{a _ {t - 1}} x _ {t - 1} + \left(\sigma_ {t} - \frac {\sigma_ {t - 1} a _ {t}}{a _ {t - 1}}\right) \tilde {\epsilon} _ {\theta} (x _ {t - 1}, t - 1, c) \tag {5}
|
| 82 |
+
$$
|
| 83 |
+
|
| 84 |
+
However, there is a mismatch between $\tilde{\epsilon}_{\theta}\left(x_{t}, t, c^{\mathrm{edit}}\right)$ and $\tilde{\epsilon}_{\theta}\left(x_{t-1}, t-1, c^{\mathrm{orig}}\right)$ during sampling and inversion, causing $x_{0}^{\mathrm{edit}}$ to be significantly different from $x_{0}^{\mathrm{orig}}$ . Therefore, much of the research on SD text-guided image editing focuses on improving reconstruction. These inversion methods can be broadly classified into deterministic-inversion-based [14, 27, 38, 46] and random-inversion-based [24, 75] techniques. Edit Friendly (EF) [24] - a state-of-the-art random-inversion-based method - can be formulated under the following framework:
|
| 85 |
+
|
| 86 |
+
$$
u_t^{\mathrm{orig}} = x_{t-1}^{\mathrm{orig}} - \tilde{\mu}_{\theta,\omega,t,t-1}\left(x_t^{\mathrm{orig}}, c^{\mathrm{orig}}\right) \tag{6}
$$

$$
x_{t-1}^{\mathrm{edit}} = \tilde{\mu}_{\theta,\omega,t,t-1}\left(x_t^{\mathrm{edit}}, c^{\mathrm{edit}}\right) + u_t^{\mathrm{orig}} \tag{7}
$$
|
| 93 |
+
|
| 94 |
+
Here, $u_{t}^{\mathrm{orig}}$ serves as a residual term that ensures non-edited features from $x_{t-1}^{\mathrm{orig}}$ are retained in the edited version $x_{t-1}^{\mathrm{edit}}$ . For EF, the set $\left\{x_{t}^{\mathrm{orig}}\right\}_{t=1}^{T}$ is constructed by sampling $x_{t}^{\mathrm{orig}}$ from $p\left(x_{t}|x_{0}^{\mathrm{orig}}\right)$ for each $t$ in parallel. Interestingly, this set can also be built sequentially through DDIM inversion as per Eq. 5 (with $c^{\mathrm{orig}}$ replacing $c$ ).
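The residual framework of Eqs. 6-7 amounts to two lines of code once a guided-mean function is available; the sketch below assumes a `mu_tilde(x, t, c)` helper implementing $\tilde{\mu}_{\theta,\omega,t,t-1}$ and is illustrative rather than the authors' implementation.

```python
def ef_style_edit_step(mu_tilde, x_orig_t, x_orig_tm1, x_edit_t, t, c_orig, c_edit):
    """One edited reverse step in the Eq. 6-7 framework (illustrative sketch)."""
    # Eq. 6: residual storing everything the guided mean fails to reconstruct
    u_orig = x_orig_tm1 - mu_tilde(x_orig_t, t, c_orig)
    # Eq. 7: reuse the residual so non-edited content is carried over to the edit
    return mu_tilde(x_edit_t, t, c_edit) + u_orig
```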
|
| 95 |
+
|
| 96 |
+
# 2.3. Diffusion Bridges and Doob's $h$ -transform
|
| 97 |
+
|
| 98 |
+
Although various definitions of bridges exist in the literature [10, 32, 36, 39, 42, 67], we adopt the perspective of [32, 41, 85] and regard bridges as special stochastic processes that converge to a predefined sample $\hat{x}_T$ at time $T$ almost surely. A bridge can be derived from a base (or reference) Markov process through Doob's $h$ -transform [15, 54, 58]. If the base process is a diffusion process described by the SDE $dx_{t} = f(x_{t},t)dt + g(t)dw_{t}$ , the corresponding bridge is governed by the following SDE:
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
d x _ {t} = \left(f \left(x _ {t}, t\right) + g (t) ^ {2} \nabla \log h \left(x _ {t}, t\right)\right) d t + g (t) d w _ {t} \tag {8}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
where $h(x_{t}, t) = p(\hat{x}_{T} | x_{t})$ . When $f(x_{t}, t)$ is a linear function of $x_{t}$ , $h(x_{t}, t)$ simplifies into a Gaussian distribution that can be expressed in closed form [85].
|
| 105 |
+
|
| 106 |
+
# 3. Method
|
| 107 |
+
|
| 108 |
+
# 3.1. Editing as Reverse-time Bridge Modeling
|
| 109 |
+
|
| 110 |
+
In this section, we introduce a novel theoretical framework for image editing with diffusion models by framing it as a reverse-time bridge modeling problem. This idea stems from our insight that we can generate images $x_0$ exhibiting the target properties $\mathcal{Y}$ (e.g., style, shape, color, object type, ...) by constructing a bridge from the backward process that converges to an implicit distribution associated with $\mathcal{Y}$. Our framework stands apart from most existing bridge models [41, 63, 85], which focus solely on the (non-parameterized) forward process and assume an explicit target sample $\hat{x}_0$ (or set of samples $\{\hat{x}_0\}$).
|
| 111 |
+
|
| 112 |
+
To construct this bridge, we modify the transition distribution $p_{\theta}(x_{t - 1}|x_t)$ of the backward process using Doob's $h$ -transform [15, 58] as follows:
|
| 113 |
+
|
| 114 |
+
$$
|
| 115 |
+
p _ {\theta} ^ {h} \left(x _ {t - 1} | x _ {t}\right) = p _ {\theta} \left(x _ {t - 1} | x _ {t}\right) \frac {h \left(x _ {t - 1} , t - 1\right)}{h \left(x _ {t} , t\right)} \tag {9}
|
| 116 |
+
$$
|
| 117 |
+
|
| 118 |
+
Here, $h(x_{t}, t)$ is a positive real-valued function that satisfies the following conditions for all $t \in [1, T]$ :
|
| 119 |
+
|
| 120 |
+
$$
|
| 121 |
+
h \left(x _ {t}, t\right) = \int p _ {\theta} \left(x _ {t - 1} \mid x _ {t}\right) h \left(x _ {t - 1}, t - 1\right) d x _ {t - 1} \tag {10}
|
| 122 |
+
$$
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
h \left(x _ {0}, 0\right) = p _ {\mathcal {Y}} \left(x _ {0}\right) \tag {11}
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
where $p_{\mathcal{Y}}(x_0)$ is a predefined distribution quantifying how likely $x_0$ possesses the attributes $\mathcal{Y}$ . $p_{\mathcal{Y}}(x_0) = 0$ if $x_0$ does not have the attributes $\mathcal{Y}$ and $> 0$ otherwise. For clarity in the subsequent discussion, we will omit the parameter $\theta$ in $p_{\theta}(x_{t - 1}|x_t)$ and $p_{\theta}^{h}(x_{t - 1}|x_t)$ , referring to them simply as $p(x_{t - 1}|x_t)$ and $p^{h}(x_{t - 1}|x_t)$ .
|
| 129 |
+
|
| 130 |
+
It can be shown that $h(x_{t}, t) = \mathbb{E}_{p(x_{0} | x_{t})}[h(x_{0}, 0)]$ (Appdx. A.1) and that the bridge constructed in this manner forms a reverse-time Markov process with the transition distribution $p^{h}(x_{t-1} | x_{t})$. At time 0, this process converges to a distribution formally stated in Proposition 1 below:
|
| 131 |
+
|
| 132 |
+
Proposition 1. Consider a reverse-time Markov process with the transition distribution $p(x_{t - 1}|x_t)$ and a positive real-valued function $h(x_{t},t)$ satisfying Eqs. 10, 11 for all $t\in [1,T]$. If we construct a bridge from this Markov process such that its transition distribution $p^h (x_{t - 1}|x_t)$ is defined as in Eq. 9, then the bridge is also a reverse-time Markov process. Moreover, if the distribution at time $T$ of the bridge, $p^h (x_T)$, is set to $\frac{p(x_T)h(x_T,T)}{\mathbb{E}_{p(x_0)}[h(x_0,0)]}$, then $p^h (x_t) = \frac{p(x_t)h(x_t,t)}{\mathbb{E}_{p(x_0)}[h(x_0,0)]}$ for all $t\in [0,T]$.
|
| 133 |
+
|
| 134 |
+
Proof. The detailed proof is provided in Appdx. A.2. $\square$
|
| 135 |
+
|
| 136 |
+
Corollary 1. $p^h (x_0)$ is proportional to $p(x_0)p_{\mathcal{Y}}(x_0)$ .
|
| 137 |
+
|
| 138 |
+
Figure 2. Overview of implicit $h$ -Edit in comparison with PnP Inversion + P2P [27] and Edit Friendly [24].
|
| 139 |
+

|
| 142 |
+

|
| 143 |
+
|
| 144 |
+

|
| 145 |
+
|
| 146 |
+
Corollary 1 implies that generated samples from the bridge not only possess the attributes $\mathcal{Y}$ but also look real. The realism associated with $p(x_0)$ comes from the base process used to construct the bridge. It can be suppressed if $h(x_0,0)$ is set to $p_{\mathcal{Y}}(x_0) / p(x_0)$, resulting in $p^h (x_0)\propto p_{\mathcal{Y}}(x_0)$. More generally, we can specify any target distribution for the bridge to converge to by appropriately selecting $h(x_0,0)$. This highlights the generalizability of our framework for editing.
|
| 147 |
+
|
| 148 |
+
A notable special case of our framework is when $h(x_0, 0) = p(y|x_0)$ with $y$ being a known attribute (e.g., a class label [12] or a text prompt [55]). In this case, $h(x_t, t) = \mathbb{E}_{p(x_0|x_t)}[p(y|x_0)] = p(y|x_t)$ . Below, we discuss the continuous-time formulation of the bridge for the sake of completeness.
|
| 149 |
+
|
| 150 |
+
Proposition 2. If the base Markov process is characterized by the reverse-time SDE $dx_{t} = \left(f(x_{t},t) - g(t)^{2}\nabla \log p_{t}(x_{t})\right)dt + g(t)d\overline{w}_{t}$ [1, 66], then the bridge constructed from it via Doob's h-transform has the formula:
|
| 151 |
+
|
| 152 |
+
$$
|
| 153 |
+
\begin{array}{l} d x _ {t} = \left(f (x _ {t}, t) - g (t) ^ {2} (\nabla \log p (x _ {t}) + \nabla \log h (x _ {t}, t))\right) d t \\ + g (t) d \bar {w} _ {t} \tag {12} \\ \end{array}
|
| 154 |
+
$$
|
| 155 |
+
|
| 156 |
+
# 3.2. $h$ -Edit
|
| 157 |
+
|
| 158 |
+
After constructing the bridge, image editing can be carried out through ancestral sampling from time $T$ to time 0 along the bridge. However, for a general function $h$ , $p^h(x_{t-1}|x_t)$ is typically non-Gaussian, making direct Monte Carlo sampling from this distribution impractical. Therefore, we must rely on Markov Chain Monte Carlo (MCMC) methods, such as Langevin Monte Carlo (LMC) [53, 74], for sampling. LMC is particularly well-suited for diffusion models due to the availability of score functions at every time $t$ .
|
| 159 |
+
|
| 160 |
+
To sample from the (unnormalized) target distribution $p^h(x_0) \propto p(x_0) h(x_0, 0)$, we perform a sequence of LMC updates, with each update defined as follows:
|
| 163 |
+
|
| 164 |
+
$$
\begin{aligned}
x_{t-1} &\approx x_t + \eta \nabla_{x_t} \log\big(p(x_t)\, h(x_t, t)\big) + \sqrt{2\eta}\, z \quad (13) \\
&= \big(x_t + \eta \nabla_{x_t} \log p(x_t) + \sqrt{2\eta}\, z\big) + \eta \nabla_{x_t} \log h(x_t, t) \quad (14) \\
&= \underbrace{x_{t-1}^{\text{base}}}_{\text{rec.}} + \eta \underbrace{\nabla_{x_t} \log h(x_t, t)}_{\text{editing}} \quad (15)
\end{aligned}
$$
|
| 167 |
+
|
| 168 |
+
where $z \sim \mathcal{N}(0, \mathrm{I})$ , $\eta > 0$ is the step size, $x_{t}$ and $x_{t-1}$ denote edited samples at time $t$ and $t - 1$ , respectively. A similar expression to Eq. 15 can be derived by solving the bridge SDE in Eq. 12 using the Euler-Maruyama method [51]. Intuitively, $x_{t-1}$ and $x_{t-1}^{\mathrm{base}}$ can be regarded as samples from $p^{h}(x_{t-1}|x_{t})$ and $p(x_{t-1}|x_{t})$ , respectively. According to the formula of $p^{h}(x_{t-1}|x_{t})$ in Eq. 9, we can also sample $x_{t-1}$ as follows:
|
| 169 |
+
|
| 170 |
+
$$
\begin{aligned}
x_{t-1} &\approx x_{t-1}^{\text{init}} + \gamma \nabla_{x_{t-1}} \log p^{h}\left(x_{t-1} \mid x_t\right) + \sqrt{2\gamma}\, z \quad (16) \\
&= \big(x_{t-1}^{\text{init}} + \gamma \nabla_{x_{t-1}} \log p(x_{t-1} \mid x_t) + \sqrt{2\gamma}\, z\big) + \gamma \nabla_{x_{t-1}} \log h(x_{t-1}, t-1) \quad (17) \\
&\approx \underbrace{x_{t-1}^{\text{base}}}_{\text{rec.}} + \gamma \underbrace{\nabla_{x_{t-1}} \log h\left(x_{t-1}^{\text{base}}, t-1\right)}_{\text{editing}} \quad (18)
\end{aligned}
$$
|
| 173 |
+
|
| 174 |
+
Here, $\gamma > 0$ is the step size. The gradient $\nabla_{x_{t-1}}\log p^h(x_{t-1}|x_t)$ does not involve $h(x_t,t)$ because it is constant w.r.t. $x_{t-1}$ . Both updates in Eqs. 15, 18 inherently fulfill two key image editing objectives - faithfulness and effectiveness - through their decomposition into a "reconstruction" term $x_{t-1}^{\mathrm{base}}$ and an "editing" term $\nabla_{x_t}\log h(x_t,t)$ or $\nabla_{x_{t-1}}\log h(x_{t-1}^{\mathrm{base}},t-1)$ , with $\eta$ or $\gamma$ serving as the trade-off coefficient. Eq. 15 is explicit while Eq. 18 is implicit. Furthermore, we can view Eq. 18 as a general optimization problem:
|
| 175 |
+
|
| 176 |
+
$$
x_{t-1} = \underset{x_{t-1}^{\prime}}{\operatorname{argmax}}\ \gamma \log h\left(x_{t-1}^{\prime}, t-1\right) \tag{19}
$$
|
| 179 |
+
|
| 180 |
+
with $x_{t-1}^{\mathrm{base}}$ being the initial value, and perform multiple gradient ascent updates to improve the editing quality:
|
| 181 |
+
|
| 182 |
+
$$
x_{t-1}^{(0)} = x_{t-1}^{\text{base}} \tag{20}
$$

$$
x_{t-1}^{(k+1)} = x_{t-1}^{(k)} + \gamma \nabla_{x_{t-1}} \log h\left(x_{t-1}^{(k)}, t-1\right) \tag{21}
$$
|
| 189 |
+
|
| 190 |
+
Eq. 21 is indeed the $k$-th iteration of the implicit update formula in Eq. 18.
|
| 191 |
+
|
| 192 |
+
We refer to our proposed editing method as $h$ -Edit with Eqs. 15 and 18 representing the explicit and implicit versions of $h$ -Edit, respectively. $h$ -Edit is highly flexible as it can incorporate arbitrary log $h$ -functions, provided their gradients w.r.t. noisy samples can be efficiently computed.
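A minimal sketch of the implicit $h$-Edit update (Eqs. 18-21) is given below, assuming a `base_step` callable that yields $x_{t-1}^{\mathrm{base}}$ from any existing inversion technique and a differentiable `log_h`; both interfaces are hypothetical stand-ins rather than the released implementation.

```python
import torch

def implicit_h_edit_step(base_step, log_h, x_edit_t, t, gamma, num_opt_steps=1):
    """x_{t-1}^{edit} = x_{t-1}^{base} + gamma * grad log h (Eq. 18), iterated as in Eqs. 20-21."""
    x = base_step(x_edit_t, t)              # reconstruction term x_{t-1}^{base} (Eq. 20)
    for _ in range(num_opt_steps):          # gradient ascent on log h (Eq. 21)
        x = x.detach().requires_grad_(True)
        score = torch.autograd.grad(log_h(x, t - 1).sum(), x)[0]
        x = (x + gamma * score).detach()
    return x
```

In the explicit variant (Eq. 15), the score would instead be evaluated at $x_t$ before the base step, with no inner loop.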
|
| 193 |
+
|
| 194 |
+
For text-guided editing with Stable Diffusion [55], an explicit $h$ -Edit update is given by:
|
| 195 |
+
|
| 196 |
+
$$
x_{t-1}^{\text{base}} = \tilde{\mu}_{\theta,\omega,t,t-1}\left(x_t^{\text{edit}}, c^{\text{orig}}\right) + u_t^{\text{orig}} \tag{22}
$$

$$
x_{t-1}^{\text{edit}} = x_{t-1}^{\text{base}} + \left(\sqrt{\sigma_{t-1}^{2} - \omega_{t,t-1}^{2}} - \frac{\sigma_t a_{t-1}}{a_t}\right) f\left(x_t^{\text{edit}}, t\right) \tag{23}
$$
|
| 203 |
+
|
| 204 |
+
where $\tilde{\mu}_{\theta,\omega,t,t-1}(\cdot,\cdot)$ and $u_t^{\mathrm{orig}}$ are defined in Eq. 4 and Eq. 6, respectively. $f(x_t,t)$ is expressed as follows:
|
| 205 |
+
|
| 206 |
+
$$
f\left(x_t, t\right) = w^{\text{edit}} \epsilon_{\theta}\left(x_t, t, c^{\text{edit}}\right) - \hat{w}^{\text{orig}} \epsilon_{\theta}\left(x_t, t, c^{\text{orig}}\right) + \left(\hat{w}^{\text{orig}} - w^{\text{edit}}\right) \epsilon_{\theta}\left(x_t, t, \varnothing\right) \tag{24}
$$
|
| 209 |
+
|
| 210 |
+
Here, $w^{\mathrm{edit}}$ and $\hat{w}^{\mathrm{orig}}$ are guidance weights. $\hat{w}^{\mathrm{orig}}$ may differ from the $w^{\mathrm{orig}}$ used during inversion. A one-step implicit $h$ -Edit update can be derived from Eq. 23 by replacing $f(x_{t}^{\mathrm{edit}}, t)$ with $f(x_{t-1}^{\mathrm{base}}, t-1)$ , which gives:
|
| 211 |
+
|
| 212 |
+
$$
x_{t-1}^{\text{edit}} = x_{t-1}^{\text{base}} + \left(\sqrt{\sigma_{t-1}^{2} - \omega_{t,t-1}^{2}} - \frac{\sigma_t a_{t-1}}{a_t}\right) f\left(x_{t-1}^{\text{base}}, t-1\right) \tag{25}
$$
|
| 215 |
+
|
| 216 |
+
A detailed derivation of Eqs. 22-25 is provided in Appdx. A.3. An overview of our method in comparison with Edit Friendly [24] and PnP Inversion [27] is shown in Fig. 2.
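To make the text-guided instantiation concrete, the sketch below computes the editing term $f$ of Eq. 24 and applies the one-step implicit update of Eq. 25; `eps_model` and `coeff` (the bracketed scalar in Eq. 23) are assumed inputs, not the official API.

```python
def text_guided_editing_term(eps_model, x, t, c_edit, c_orig, null_c,
                             w_edit=10.0, w_orig_hat=9.0):
    """f(x, t) from Eq. 24: a weighted difference of noise predictions."""
    return (w_edit * eps_model(x, t, c_edit)
            - w_orig_hat * eps_model(x, t, c_orig)
            + (w_orig_hat - w_edit) * eps_model(x, t, null_c))

def one_step_implicit_text_edit(eps_model, x_base_tm1, t, c_edit, c_orig, null_c, coeff):
    """Eq. 25: evaluate f at x_{t-1}^{base} instead of x_t^{edit} and add it with weight coeff."""
    f_val = text_guided_editing_term(eps_model, x_base_tm1, t - 1, c_edit, c_orig, null_c)
    return x_base_tm1 + coeff * f_val
```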
|
| 217 |
+
|
| 218 |
+
Next, we will delve into the design of $h$ and its score. We will focus on the implicit form and write $\nabla \log h(x_{t-1}, t-1)$ instead of $\nabla_{x_{t-1}} \log h(x_{t-1}, t-1)$ for simplicity.
|
| 219 |
+
|
| 220 |
+
# 3.3. Designing $h$ -Functions
|
| 221 |
+
|
| 222 |
+
# 3.3.1 $h$ -functions for conditional diffusion models
|
| 223 |
+
|
| 224 |
+
In most conditional diffusion models, $h(x_{t - 1}, t - 1) = p(y|x_{t - 1})$ where $y$ is a predefined condition. This means:
|
| 225 |
+
|
| 226 |
+
$$
\begin{aligned}
\nabla \log h(x_{t-1}, t-1) &= \nabla \log p(y \mid x_{t-1}) \quad (26) \\
&= \nabla \log p(x_{t-1} \mid y) - \nabla \log p(x_{t-1}) \quad (27)
\end{aligned}
$$
|
| 229 |
+
|
| 230 |
+
Eqs. 26 and 27 correspond to the classifier-based guidance and classifier-free guidance cases, respectively. For text-guided editing with SD, $\nabla \log p(x_{t - 1}|y)$ and $\nabla \log p(x_{t - 1})$ are modeled as $\frac{-\tilde{\epsilon}_{\theta}(x_{t - 1},t - 1,c^{\mathrm{edit}})}{\sigma_{t - 1}}$ and $\frac{-\tilde{\epsilon}_{\theta}(x_{t - 1},t - 1,c^{\mathrm{orig}})}{\sigma_{t - 1}}$ , respectively.
|
| 231 |
+
|
| 232 |
+
# 3.3.2 External reward models $h(x_0,0)$
|
| 233 |
+
|
| 234 |
+
In many practical editing scenarios, only external reward models on clean data $h(x_0, 0)$ are available. This means $h(x_t, t)$ cannot take $x_t$ as the direct input but must be computed through $h(x_0, 0)$ as $\mathbb{E}_{p(x_0 | x_t)}[h(x_0, 0)]$ . Since directly sampling from $p(x_0 | x_t)$ is difficult, existing works [2, 9, 79] usually approximate $h(x_t, t) = \mathbb{E}_{p(x_0 | x_t)}[h(x_0, 0)]$ by $h(x_{0|t}, 0)$ where $x_{0|t} \coloneqq \mathbb{E}_{p(x_0 | x_t)}[x_0]$ denotes the posterior estimation of $x_0$ given $x_t$ . In SD, $x_{0|t}$ can be derived from $x_t$ and $\tilde{\epsilon}_{\theta}(x_t, t, c^{\mathrm{orig}})$ as $\frac{x_t - \sigma_t \tilde{\epsilon}_{\theta}(x_t, t, c^{\mathrm{orig}})}{a_t}$ based on Tweedie's formula [16].
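This approximation can be sketched as follows: estimate $x_{0|t}$ with Tweedie's formula, evaluate the clean-data reward there, and backpropagate to the noisy sample. The `reward_log_h` callable and the schedule tensors are illustrative assumptions, not the paper's exact code.

```python
import torch

def reward_score_via_tweedie(eps_model, reward_log_h, x_t, t, c_orig, a, sigma):
    """Approximate grad_x log h(x_t, t) by log h(x_{0|t}, 0), with x_{0|t} from Tweedie's formula."""
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t, c_orig)
    x0_hat = (x_t - sigma[t] * eps) / a[t]   # posterior mean estimate of x_0 given x_t
    log_h = reward_log_h(x0_hat).sum()       # clean-data reward, e.g. an identity or style score
    return torch.autograd.grad(log_h, x_t)[0]
```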
|
| 235 |
+
|
| 236 |
+
# 3.3.3 $h$ -functions for reconstruction
|
| 237 |
+
|
| 238 |
+
In addition to using $h$ as an editing function, we can design an $h$ -function specifically for reconstruction, defined as:
|
| 239 |
+
|
| 240 |
+
$$
h_{\text{rec}}\left(x_{t-1}, t-1\right) := \exp\left(-\lambda_{t-1} \left\| x_{t-1} - x_{t-1}^{\text{base}} \right\|_2^2\right) \tag{28}
$$
|
| 243 |
+
|
| 244 |
+
When this $h$ -function is integrated into our optimization framework in Eq. 19, it enables simultaneous optimization-free and optimization-based reconstruction (via $x_{t-1}^{\mathrm{base}}$ and $\nabla \log h_{\mathrm{rec}}(x_{t-1}, t-1)$ , respectively), exclusive to $h$ -Edit.
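Because $h_{\mathrm{rec}}$ in Eq. 28 is a Gaussian-shaped penalty around $x_{t-1}^{\mathrm{base}}$, its score has the simple closed form sketched below; $\lambda_{t-1}$ is a user-chosen weight in this illustration.

```python
def grad_log_h_rec(x_tm1, x_base_tm1, lam_tm1):
    """grad log h_rec = -2 * lambda_{t-1} * (x_{t-1} - x_{t-1}^{base}), from Eq. 28."""
    return -2.0 * lam_tm1 * (x_tm1 - x_base_tm1)
```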
|
| 245 |
+
|
| 246 |
+
# 3.3.4 Product of $h$ -Experts
|
| 247 |
+
|
| 248 |
+
Since $\log h$ can be interpreted as a negative energy function, we can combine multiple $h$ -functions to create a "product of $h$ -experts" as follows:
|
| 249 |
+
|
| 250 |
+
$$
|
| 251 |
+
h = h _ {1} * h _ {2} * \dots * h _ {m} \tag {29}
|
| 252 |
+
$$
|
| 253 |
+
|
| 254 |
+
where $m$ denotes the number of $h$ -functions. The combined $h$ -function in Eq. 29 can be easily integrated into our framework by summing the score for each component:
|
| 255 |
+
|
| 256 |
+
$$
|
| 257 |
+
\nabla \log h (x _ {t - 1}, t - 1) = \sum_ {i = 1} ^ {m} \nabla \log h _ {i} (x _ {t - 1}, t - 1) \tag {30}
|
| 258 |
+
$$
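In code, the product of $h$-experts reduces to summing the individual scores; the optional per-expert weights in the sketch below are an illustrative choice rather than something prescribed by Eq. 30.

```python
def product_of_h_experts_score(score_fns, x_tm1, t, weights=None):
    """Sum the scores of several h-experts (Eq. 30)."""
    weights = weights or [1.0] * len(score_fns)
    total = 0.0
    for w, score_fn in zip(weights, score_fns):
        total = total + w * score_fn(x_tm1, t)
    return total
```

For the combined text-and-style experiments in Section 5.3, for example, one expert would be the text-guided term and another the style reward.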
|
| 259 |
+
|
| 260 |
+
# 4. Related Work
|
| 261 |
+
|
| 262 |
+
Due to space constraints, this section only covers related work in training-free editing. For details on conditional generation and diffusion bridges, please refer to Appdx. C.
|
| 263 |
+
|
| 264 |
+
The advent of conditional diffusion models, particularly text-guided latent diffusion models like Stable Diffusion [55], has greatly advanced the development of various diffusion-based text-guided image editing techniques. These methods can be broadly categorized into training-based [31, 33, 35, 82] and training-free methods [38, 44, 46, 76, 77]. Unlike training-based methods, which finetune the noise network [33] or employ an auxiliary model [35] through additional training, training-free methods modify the attention or feature maps in Stable Diffusion (SD) [6, 19, 50, 70] or adjust the generation process of SD [46] to ensure editing fidelity. Null-text inversion (NTI) [46] optimizes the null-text embedding during generation to minimize discrepancies between this process and the forward process. Prompt Tuning inversion (PTI) [14] interpolates between the target text embedding and the null-text embedding optimized by NTI to create a more suitable embedding for editing. EDICT [72] draws inspiration from affine coupling layers in normalizing flows to design a more faithful reconstruction process compared to DDIM sampling. Negative Prompt inversion (NPI) [45] bypasses the costly optimization of NTI by using the original text embedding instead of the null-text embedding, while ProxNPI [18] adds an auxiliary regularization term to enhance NPI's reconstruction capabilities. Noise Map Guidance (NMG) [7] leverages energy-based guidance [83] and information from the inversion process to denoise samples in a way that improves reconstruction. PnP Inversion [27] avoids optimization by incorporating the difference between inversion and reconstruction samples directly into the editing update. AIDI [48] views exact reconstruction as a fixed-point iteration problem and uses Anderson acceleration to find the solution. Unlike these deterministic-inversion-based methods, Edit Friendly (EF) [24] employs random inversion with independent sampling of intermediate noisy samples, achieving good reconstruction without the need for attention map adjustments like P2P. LEDITS++ [3] introduces several enhancements to EF, improving both efficiency and versatility in editing. Generally, most training-free methods are limited to text-guided editing, while our approach allows for the seamless combination of multiple editing types due to the clear separation of the reconstruction and editing terms.

<table><tr><td>Inv.</td><td>Attn.</td><td>Method</td><td>CLIP Sim.↑</td><td>Local CLIP↑</td><td>DINO Dist.×10^2↓</td><td>LPIPS×10^2↓</td><td>SSIM×10↑</td><td>PSNR↑</td></tr><tr><td rowspan="6">Deter.</td><td rowspan="6">P2P</td><td>NP</td><td>0.246</td><td>0.140</td><td>1.62</td><td>6.90</td><td>8.34</td><td>26.21</td></tr><tr><td>NT</td><td>0.248</td><td>0.130</td><td>1.34</td><td>6.07</td><td>8.41</td><td>27.03</td></tr><tr><td>StyleD</td><td>0.248</td><td>0.085</td><td>1.17</td><td>6.61</td><td>8.34</td><td>26.05</td></tr><tr><td>NMG</td><td>0.249</td><td>0.087</td><td>1.32</td><td>5.59</td><td>8.47</td><td>27.05</td></tr><tr><td>PnP Inv</td><td>0.250</td><td>0.095</td><td>1.17</td><td>5.46</td><td>8.48</td><td>27.22</td></tr><tr><td>h-Edit-D</td><td>0.253</td><td>0.147</td><td>1.17</td><td>4.85</td><td>8.54</td><td>27.87</td></tr><tr><td rowspan="5">Random</td><td rowspan="3">None</td><td>EF</td><td>0.254</td><td>0.122</td><td>1.29</td><td>6.09</td><td>8.37</td><td>25.87</td></tr><tr><td>LEDITS++</td><td>0.254</td><td>0.113</td><td>2.34</td><td>8.88</td><td>8.11</td><td>23.36</td></tr><tr><td>h-Edit-R</td><td>0.255</td><td>0.148</td><td>1.28</td><td>5.55</td><td>8.46</td><td>26.43</td></tr><tr><td rowspan="2">P2P</td><td>EF</td><td>0.255</td><td>0.126</td><td>1.51</td><td>5.70</td><td>8.40</td><td>26.30</td></tr><tr><td>h-Edit-R</td><td>0.256</td><td>0.159</td><td>1.45</td><td>5.08</td><td>8.50</td><td>26.97</td></tr></table>

Table 1. Text-guided image editing results of $h$ -Edit and other baselines. The best and second best results for each metric and inversion type are highlighted in bold and underscored, respectively.
|
| 271 |
+
|
| 272 |
+
# 5. Experiments
|
| 273 |
+
|
| 274 |
+
Due to the space limit, we only provide the main results in this section and refer readers to Appdx. F for our ablation studies on $w^{\mathrm{edit}}$, $\hat{w}^{\mathrm{orig}}$, the number of optimization steps, and other additional results. Our source code is available at https://github.com/nektoan/h-edit.
|
| 277 |
+
|
| 278 |
+
# 5.1. Text-guided Editing
|
| 279 |
+
|
| 280 |
+
# 5.1.1 Experiment Setup
|
| 281 |
+
|
| 282 |
+
We evaluate our method on text-guided image editing using the PIE-Bench dataset [27], which includes 700 diverse images of humans, animals, and objects across various environments. Each image comes with original and edited text descriptions and an annotated mask indicating the editing region. PIE-Bench features 10 distinct editing categories, including adding, removing, or modifying objects, styles, and backgrounds.
|
| 283 |
+
|
| 284 |
+
For evaluation, we follow [27] to use CLIP similarity [52] between the edited image and text to measure editing effectiveness. To assess editing faithfulness, we compute PSNR, LPIPS [81], and SSIM [73] on non-edited regions, as defined by the editing masks, and DINO feature distance [69] on the entire image. Additionally, we include local directional CLIP similarity [33] to enhance evaluation of editing effectiveness, as standard CLIP similarity may be insufficient when the edited attribute represents only a small part of the target text. While these metrics offer insights, they are imperfect, as analyzed in Appdx. G. Visual assessments remain essential for evaluating editing quality.
|
| 285 |
+
|
| 286 |
+
We compare $h$ -Edit with state-of-the-art diffusion-based text-guided editing baselines that use either deterministic or random inversion, including NT [46], NP [45], StyleD [38], NMG [7], PnP Inv [27], EF [24], and LEDITS++ [3]. We refer to $h$ -Edit with deterministic inversion as $h$ -Edit-D, and with random inversion as $h$ -Edit-R. For a fair comparison, we adhere to the default settings in [24, 27], using Stable Diffusion v1.4 [55] and 50 sampling steps for editing. Following [27], we apply Prompt-to-Prompt (P2P) [19] to all deterministic-inversion-based methods to ensure faithful reconstruction. For random-inversion-based methods, we report results both with and without P2P. Unless otherwise specified, we use the implicit form with a single optimization step (Eq. 18) for both $h$ -Edit-D and $h$ -Edit-R. The hyperparameters $w^{\mathrm{orig}}$, $w^{\mathrm{edit}}$, $\hat{w}^{\mathrm{orig}}$ are set to 1.0, 10.0, 9.0 for $h$ -Edit-D, and 1.0, 7.5, 5.0 for $h$ -Edit-R, respectively, as these values yield strong quantitative and qualitative results. Detailed ablation studies on these hyperparameters are provided in Appdx. F.



Figure 3. Left: Visualization of swapped faces produced by implicit $h$ -Edit-R and baselines. (3s) denotes $h$ -Edit-R with 3 optimization steps. Identity similarity scores (higher is better) are displayed below each output. Right: Face swapping results of implicit $h$ -Edit-R and other baselines. †: The expression error for MegaFS was calculated on images with detectable faces, as required by the evaluation metric.

<table><tr><td>Method</td><td>ID↑</td><td>Expr.↓</td><td>Pose↓</td><td>LPIPS↓</td><td>FID↓</td></tr><tr><td>FaceShifter</td><td>0.70</td><td>2.39</td><td>2.81</td><td>0.08</td><td>10.16</td></tr><tr><td>MegaFS</td><td>0.34</td><td>2.88†</td><td>7.71</td><td>0.15</td><td>27.07</td></tr><tr><td>AFS</td><td>0.47</td><td>2.92</td><td>4.68</td><td>0.13</td><td>17.55</td></tr><tr><td>DiffFace</td><td>0.61</td><td>3.04</td><td>4.35</td><td>0.10</td><td>11.89</td></tr><tr><td>EF</td><td>0.74</td><td>3.10</td><td>4.12</td><td>0.06</td><td>20.78</td></tr><tr><td>h-edit-R</td><td>0.80</td><td>2.76</td><td>3.78</td><td>0.04</td><td>17.68</td></tr><tr><td>h-edit-R (3s)</td><td>0.84</td><td>3.10</td><td>4.29</td><td>0.05</td><td>19.12</td></tr></table>
|
| 294 |
+
|
| 295 |
+
# 5.1.2 Results
|
| 296 |
+
|
| 297 |
+
As shown in Table 1, $h$ -Edit-D + P2P significantly outperforms all deterministic-inversion-based baselines with P2P in both editing effectiveness and faithfulness. For example, our method improves over NT, a strong baseline, by $1.22 \times 10^{-2}$ in LPIPS and 0.017 in local CLIP similarity. We observed that PnP Inv and NMG often reconstruct the original image in challenging editing scenarios, achieving high faithfulness despite not actually making meaningful changes. In contrast, $h$ -Edit-D + P2P consistently performs successful edits while maintaining superior faithfulness. This validates the theoretical soundness of $h$ -Edit compared to other methods.
|
| 298 |
+
|
| 299 |
+
Similarly, $h$ -Edit-R outperforms both EF and LEDITS++ across all metrics, with or without P2P. This improvement is largely due to the implicit form and the carefully selected value of $\hat{w}^{\mathrm{orig}}$ - features unique to $h$ -Edit. Additionally, we observed that LEDITS++ occasionally produces unfaithful or erroneous images, even after hyperparameter tuning. Notably, random-inversion methods (including $h$ -Edit-R) without P2P often fall behind their P2P-enabled counterparts in changing color and texture but excel in adding and removing objects, suggesting that the choice to combine with P2P depends on the specific editing scenario.
|
| 300 |
+
|
| 301 |
+
In Fig. 1 and Appdx. E.1, we provide a non-exhaustive list of edited images by our method and baselines, showcasing our superior performance.
|
| 302 |
+
|
| 303 |
+
# 5.2. Face Swapping
|
| 304 |
+
|
| 305 |
+
# 5.2.1 Experimental Settings
|
| 306 |
+
|
| 307 |
+
We consider face swapping as a benchmark to verify the capabilities of $h$ -Edit in reward-model-based editing. Given a diffusion model trained on $256 \times 256$ CelebA-HQ facial images [28, 44], and a pretrained ArcFace model [11], our goal is to transfer the identity from a reference face $x_0^{\mathrm{ref}}$ to an original face $x_0^{\mathrm{orig}}$ while preserving other attributes of $x_0^{\mathrm{orig}}$ such as hair style, pose, facial expression, and background. For this experiment, we use 5,000 pairs $\left(x_0^{\mathrm{orig}}, x_0^{\mathrm{ref}}\right)$ sampled randomly from CelebA-HQ.
|
| 310 |
+
|
| 311 |
+
We use implicit $h$ -Edit-R with either 1 or 3 optimization steps. Since P2P is inapplicable to unconditional diffusion models, our method operates without P2P. The cosine similarity between the edited image $x_0^{\mathrm{edit}}$ and $x_0^{\mathrm{ref}}$ is employed as the reward, and the score $\nabla \log h(x_{t-1}, t-1)$ is approximated based on the technique discussed in Section 3.3.2. We compare $h$ -Edit-R to well-known face-swapping methods, including GAN-based (FaceShifter [37]), Style-GAN-based (MegaFS [86] and AFS [71]), and diffusion-based (DiffFace [34]). Unlike DiffFace which is a training-based method, our method is training-free. We also include EF as a training-free baseline by adding the score to its editing term as described in Algo. B.2. This extension of EF has never been considered in the literature. We use 100 sampling steps for all diffusion-based methods, including DiffFace. Facial images generated by all methods are masked before evaluation, with unmasked results provided in Appdx. F.5. Following [37, 71], we assess editing effectiveness via cosine similarity using ArcFace, faithfulness via expression/pose error and LPIPS, and visual quality via FID [20].
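A sketch of the identity reward assumed here: cosine similarity between ArcFace embeddings of the current clean-image estimate and the reference face, which can be plugged into the Tweedie-based score of Section 3.3.2. `arcface` denotes a frozen embedding network; this is illustrative, not the exact experimental pipeline.

```python
import torch
import torch.nn.functional as F

def identity_log_h(arcface, x0_hat, x0_ref):
    """log h for face swapping: cosine similarity between ArcFace identity embeddings."""
    emb_edit = F.normalize(arcface(x0_hat), dim=-1)
    with torch.no_grad():                      # the reference embedding needs no gradient
        emb_ref = F.normalize(arcface(x0_ref), dim=-1)
    return (emb_edit * emb_ref).sum(dim=-1)
```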
|
| 312 |
+
|
| 313 |
+
# 5.2.2 Results
|
| 314 |
+
|
| 315 |
+
As shown in Fig. 3 (right), both versions of $h$ -Edit-R achieve the highest face-swapping accuracies. $h$ -Edit-R also ranks second-best in preserving expressions and poses, outperforming DiffFace and EF by large margins. However, in terms of FID, our method falls short of FaceShifter and DiffFace, likely because these methods are specifically tailored for face swapping and trained on larger face datasets (FFHQ [29] for DiffFace and FFHQ + CelebA-HQ for FaceShifter). Using three optimization steps improves the identity transfer accuracy compared to using one, both quantitatively and qualitatively (Fig. 3 (left)), showcasing the advantage of our implicit form. However, this improvement may slightly reduce faithfulness, especially when the source and reference faces differ significantly. Additional visualizations are provided in Appdx. E.2.



Figure 4. Qualitative comparison of $h$ -Edit-R + P2P and EF + P2P in the combined editing task. Style losses (lower is better) are shown below each output image. h-Edit-R + P2P achieves superior results in both style transfer and text-guided editing.
|
| 321 |
+
|
| 322 |
+
# 5.3. Combined Text-guided and Style Editing
|
| 323 |
+
|
| 324 |
+
# 5.3.1 Experimental Settings
|
| 325 |
+
|
| 326 |
+
This task is similar to text-guided editing in Section 5.1 but with an additional requirement: the edited image $x_0^{\mathrm{edit}}$ should have a similar style to a reference image $x_0^{\mathrm{sty}}$. Following [79], we use the negative L2 distance between the Gram matrices [26] from the third feature layer of the CLIP image encoder w.r.t. $x_0^{\mathrm{edit}}$ and $x_0^{\mathrm{sty}}$ as a style reward. The norm of the style reward score is scaled to match the norm of the editing function $f(\cdot)$ in Eq. 24 at each time $t$, inspired by [79]. In this experiment, each original image $x_0^{\mathrm{orig}}$ from the PIE-Bench dataset is paired with a style image randomly selected from a set of 11 styles shown in Fig. 4. We employ implicit $h$ -Edit-R + P2P and compare it with EF + P2P. We keep $(w^{\mathrm{edit}}, \hat{w}^{\mathrm{orig}})$ for our method and $w^{\mathrm{edit}}$ for EF the same as in Section 5.1, tuning only the style editing coefficient $\rho^{\mathrm{sty}}$. Given the limitations of existing metrics in evaluating stylized edited images, our choice of $\rho^{\mathrm{sty}}$ is based primarily on visual quality. We found that $\rho^{\mathrm{sty}}$ equal to 0.6 and 1.5 provides the best results for our method and EF, respectively. Additional justification for this selection is provided in Appdx. E.3. All other settings remain consistent with those used in the text-guided editing experiment.
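A sketch of the style reward and the norm-matching trick described above; `clip_layer3_features`, the tensor shapes, and the helper names are assumptions for illustration, not the exact code used in the experiments.

```python
import torch

def gram(feat):
    """Gram matrix of a feature map with shape (batch, tokens, channels)."""
    return torch.einsum('btc,btd->bcd', feat, feat) / feat.shape[1]

def style_log_h(clip_layer3_features, x0_hat, x0_style):
    """Negative L2 distance between Gram matrices (higher means closer style)."""
    g_edit = gram(clip_layer3_features(x0_hat))
    g_style = gram(clip_layer3_features(x0_style))
    return -((g_edit - g_style) ** 2).sum(dim=(1, 2))

def match_score_norm(style_score_grad, edit_term, rho_sty=0.6):
    """Rescale the style score to the norm of the text-editing term f, then weight by rho_sty."""
    scale = edit_term.norm() / (style_score_grad.norm() + 1e-8)
    return rho_sty * scale * style_score_grad
```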
|
| 327 |
+
|
| 328 |
+
# 5.3.2 Results
|
| 329 |
+
|
| 330 |
+
It can be seen from Fig. 4 and the visualizations in Appdx. E.3 that $h$ -Edit-R + P2P achieves more effective text-guided and style edits while better preserving non-edited content compared to EF + P2P. EF + P2P seems to struggle with the combined editing task, sometimes introducing artifacts (e.g., a baby bear in the fourth column in Fig. 4) or altering non-edited content (e.g., a different girl in the third column). Additionally, EF + P2P is more sensitive to changes of $\rho^{\mathrm{sty}}$, as slightly increasing $\rho^{\mathrm{sty}}$ can improve style editing but also exacerbate the unfaithfulness problem (Appdx. E.3).
|
| 333 |
+
|
| 334 |
+
# 6. Conclusion
|
| 335 |
+
|
| 336 |
+
We introduced the reverse-time bridge modeling framework for effective diffusion-based image editing, and proposed $h$ -Edit - a novel training-free editing method - as an instance of our framework. $h$ -Edit leverages Doob's $h$ -transform and Langevin Monte Carlo to create an effective editing update, composed of a "reconstruction" term and an "editing" term, which capture editing faithfulness and effectiveness, respectively. This design grants our method great flexibility, allowing for seamless integration of various $h$ -functions to support different editing objectives. Extensive experiments across diverse editing tasks demonstrated that $h$ -Edit achieves state-of-the-art editing performance, as evidenced by both quantitative and qualitative results. These results validate both the theoretical soundness and practical strength of our method, which we hope will inspire future research to address more complex real-world editing challenges while maintaining theoretical guarantees.
|
| 337 |
+
|
| 338 |
+
Despite these advantages, our method faces challenges in some difficult editing cases. Although these issues could be partially mitigated by using the implicit version with multiple optimization loops (Appdx. F.3) or by manually increasing $w^{\mathrm{edit}}$ and $\hat{w}^{\mathrm{orig}}$ (Appdx. F.1), an automated solution for handling them would be highly beneficial. Another promising direction is to modify $x_{t-1}^{\mathrm{base}}$ to focus on preserving only the non-edited regions, enhancing editing effectiveness.
|
| 339 |
+
|
| 340 |
+
# Acknowledgement
|
| 341 |
+
|
| 342 |
+
The experiments in this research were partially supported by AWS Cloud services under the AWS Cloud Credit for Research Program, for which Dr. Kien Do is the recipient.
|
| 343 |
+
|
| 344 |
+
# References
|
| 345 |
+
|
| 346 |
+
[1] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. 4
|
| 347 |
+
[2] Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. In ICLR, 2024. 5, 20
|
| 348 |
+
[3] Manuel Brack, Felix Friedrich, Katharia Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, and Apolinário Passos. Ledits++: Limitless image editing using text-to-image models. In CVPR, pages 8861-8870, 2024. 6, 20
|
| 349 |
+
[4] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In CVPR, pages 18392-18402, 2023. 19
|
| 350 |
+
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, pages 1877-1901. Curran Associates, Inc., 2020. 19
|
| 351 |
+
[6] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In ICCV, pages 22560-22570, 2023. 6, 24, 25
|
| 352 |
+
[7] Hansam Cho, Jonghyun Lee, Seoung Bum Kim, TaeHyun Oh, and Yonghyun Jeong. Noise map guidance: Inversion with spatial context for real image editing. In ICLR, 2024. 1, 6, 21
|
| 353 |
+
[8] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. In ICCV, pages 14367-14376, 2021. 1
|
| 354 |
+
[9] Hyungjin Chung, Jeongsol Kim, Michael T McCann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In ICLR. The International Conference on Learning Representations, 2023. 5, 20
|
| 355 |
+
|
| 356 |
+
[10] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. NeurIPS, 34:17695-17709, 2021. 3, 20
|
| 357 |
+
[11] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, pages 4690-4699, 2019. 7
|
| 358 |
+
[12] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NeurIPS, 34: 8780-8794, 2021. 1, 2, 4, 20
|
| 359 |
+
[13] Kien Do, Duc Kieu, Toan Nguyen, Dang Nguyen, Hung Le, Dung Nguyen, and Thin Nguyen. Variational flow models: Flowing in your style. arXiv preprint arXiv:2402.02977, 2024. 28
|
| 360 |
+
[14] Wenkai Dong, Song Xue, Xiaoyue Duan, and Shumin Han. Prompt tuning inversion for text-driven image editing using diffusion models. In ICCV, pages 7430-7440, 2023. 3, 6
|
| 361 |
+
[15] Joseph L Doob. Classical potential theory and its probabilistic counterpart. Springer, 1984. 2, 3, 20
|
| 362 |
+
[16] Bradley Efron. Tweedie's formula and selection bias. Journal of the American Statistical Association, 106 (496):1602-1614, 2011. 5, 20
|
| 363 |
+
[17] Rinon Gal, Or Patashnik, Haggai Maron, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators. ACM Transactions on Graphics (TOG), 41(4):1-13, 2022. 19
|
| 364 |
+
[18] Ligong Han, Song Wen, Qi Chen, Zhixing Zhang, Kunpeng Song, Mengwei Ren, Ruijiang Gao, Anastasis Stathopoulos, Xiaoxiao He, Yuxiao Chen, et al. Proxedit: Improving tuning-free real image editing with proximal guidance. In WACV, pages 4291-4301, 2024. 6
|
| 365 |
+
[19] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-or. Prompt-to-prompt image editing with cross-attention control. In ICLR, 2023. 1, 3, 6, 17, 18, 19, 24
|
| 366 |
+
[20] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. NIPS, 30, 2017. 7
|
| 367 |
+
[21] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 1, 2, 20
|
| 368 |
+
[22] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 33:6840-6851, 2020. 1, 2, 20
|
| 369 |
+
[23] Yi Huang, Jiancheng Huang, Yifan Liu, Mingfu Yan, Jiaxi Lv, Jianzhuang Liu, Wei Xiong, He Zhang,
|
| 370 |
+
|
| 371 |
+
Shifeng Chen, and Liangliang Cao. Diffusion model-based image editing: A survey. arXiv preprint arXiv:2402.17525, 2024. 1
|
| 372 |
+
[24] Inbar Huberman-Spiegelglas, Vladimir Kulikov, and Tomer Michaeli. An edit friendly ddpm noise space: Inversion and manipulations. In CVPR, pages 12469-12478, 2024. 1, 2, 3, 4, 5, 6, 19, 20
|
| 373 |
+
[25] Aapo Hyvarinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005. 20
|
| 374 |
+
[26] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694-711. Springer, 2016. 8
|
| 375 |
+
[27] Xuan Ju, Ailing Zeng, Yuxuan Bian, Shaoteng Liu, and Qiang Xu. Pnp inversion: Boosting diffusion-based editing with 3 lines of code. In ICLR, 2024. 1, 2, 3, 4, 5, 6, 20, 21, 24
|
| 376 |
+
[28] Tero Karras. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. 7
|
| 377 |
+
[29] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, pages 4401-4410, 2019. 7
|
| 378 |
+
[30] Jack Karush. On the chapman-kolmogorov equation. The Annals of Mathematical Statistics, 32(4):1333-1337, 1961. 14
|
| 379 |
+
[31] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In CVPR, pages 6007-6017, 2023. 5
|
| 380 |
+
[32] Duc Kieu, Kien Do, Toan Nguyen, Dang Nguyen, and Thin Nguyen. Bidirectional diffusion bridge models. arXiv preprint arXiv:2502.09655, 2025. 3, 14
|
| 381 |
+
[33] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffusionclip: Text-guided diffusion models for robust image manipulation. In CVPR, pages 2426-2435, 2022. 5, 6, 19
|
| 382 |
+
[34] Kihong Kim, Yunho Kim, Seokju Cho, Junyoung Seo, Jisu Nam, Kychul Lee, Seungryong Kim, and KwangHee Lee. Diffface: Diffusion-based face swapping with facial guidance. arXiv preprint arXiv:2212.13344, 2022. 7
|
| 383 |
+
[35] Mingi Kwon, Jaeseok Jeong, and Youngjung Uh. Diffusion models already have a semantic latent space. In ICLR, 2023. 5, 19
|
| 384 |
+
[36] Bo Li, Kaitao Xue, Bin Liu, and Yu-Kun Lai. Bbdm: Image-to-image translation with brownian bridge diffusion models. In CVPR, pages 1952-1961, 2023. 3
|
| 385 |
+
[37] Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, and Fang Wen. Advancing high fidelity identity swapping for forgery detection. In CVPR, pages 5074-5083, 2020. 7, 28
|
| 386 |
+
|
| 387 |
+
[38] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Stylediffusion: Prompt-embedding inversion for text-based editing. arXiv preprint arXiv:2303.15649, 2023. 3, 5, 6, 21
|
| 388 |
+
[39] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A Theodorou, Weili Nie, and Anima Anandkumar. I2sb: Image-to-image schrödinger bridge. In ICML, pages 22042-22062, 2023. 3, 20
|
| 389 |
+
[40] Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. Compositional visual generation with composable diffusion models. In ECCV, pages 423-439. Springer, 2022. 20
|
| 390 |
+
[41] Xingchao Liu and Lemeng Wu. Learning diffusion bridges on constrained domains. In ICLR, 2023. 3, 20
|
| 391 |
+
[42] Xingchao Liu, Lemeng Wu, Mao Ye, and Qiang Liu. Let us build bridges: Understanding and extending diffusion generative models. arXiv preprint arXiv:2208.14699, 2022. 3
|
| 392 |
+
[43] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2022. 28
|
| 393 |
+
[44] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In ICLR, 2022. 1, 5, 7
|
| 394 |
+
[45] Daiki Miyake, Akihiro Iohara, Yu Saito, and Toshiyuki Tanaka. Negative-prompt inversion: Fast image inversion for editing with text-guided diffusion models. arXiv preprint arXiv:2305.16807, 2023. 6, 21
|
| 395 |
+
[46] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In CVPR, pages 6038–6047, 2023. 1, 2, 3, 5, 6, 21
|
| 396 |
+
[47] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In ICML, pages 16784-16804. PMLR, 2022. 1
|
| 397 |
+
[48] Zhihong Pan, Riccardo Gherardi, Xiufeng Xie, and Stephen Huang. Effective real image editing with accelerated iterative diffusion inversion. In ICCV, pages 15912-15921, 2023. 6
|
| 398 |
+
[49] Omkar Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In BMVC. British Machine Vision Association, 2015. 21
|
| 399 |
+
[50] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH, pages 1-11, 2023. 6
|
| 400 |
+
|
| 401 |
+
[51] Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. Springer-Verlag, 1992. 4
|
| 402 |
+
[52] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763. PMLR, 2021. 6
|
| 403 |
+
[53] Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341-363, 1996. 2, 4
|
| 404 |
+
[54] L Chris G Rogers and David Williams. Diffusions, Markov processes and martingales: Volume 2, Itô calculus. Cambridge university press, 2000. 2, 3
|
| 405 |
+
[55] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 1, 2, 4, 5, 6
|
| 406 |
+
[56] Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 conference proceedings, pages 1-10, 2022. 1
|
| 407 |
+
[57] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo-Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, 2022. 1
|
| 408 |
+
[58] Simo Särkkä and Arno Solin. Applied stochastic differential equations. Cambridge University Press, 2019. 2, 3
|
| 409 |
+
[59] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, pages 815-823, 2015. 21
|
| 410 |
+
[60] Sefik Serengil and Alper Ozpinar. A benchmark of facial recognition pipelines and co-usability performances of modules. Journal of Information Technologies, 17(2):95-107, 2024. 21
|
| 411 |
+
[61] Sefik Ilkin Serengil and Alper Ozpinar. Lightface: A hybrid deep face recognition framework. In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), pages 23-27. IEEE, 2020. 21
|
| 412 |
+
[62] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, pages 2256-2265. PMLR, 2015. 1, 2
|
| 413 |
+
[63] Vignesh Ram Somnath, Matteo Pariset, Ya-Ping Hsieh, Maria Rodriguez Martinez, Andreas Krause,
|
| 414 |
+
|
| 415 |
+
and Charlotte Bunne. Aligned diffusion schrödinger bridges. In UAI, pages 1985-1995. PMLR, 2023. 3, 20
|
| 416 |
+
[64] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In ICLR, 2021. 1, 2, 3
|
| 417 |
+
[65] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. NeurIPS, 32, 2019. 1, 2, 20
|
| 418 |
+
[66] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021. 4
|
| 419 |
+
[67] Alexander Y Tong, Nikolay Malkin, Kilian Fatras, Lazar Atanackovic, Yanlei Zhang, Guillaume Huguet, Guy Wolf, and Yoshua Bengio. Simulation-free schrödinger bridges via score and flow matching. In AISTATS, pages 1279-1287. PMLR, 2024. 3
|
| 420 |
+
[68] Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. ACM Transactions on Graphics (TOG), 40(4):1-14, 2021. 20
|
| 421 |
+
[69] Narek Tumanyan, Omer Bar-Tal, Shai Bagon, and Tali Dekel. Splicing vit features for semantic appearance transfer. In CVPR, pages 10748-10757, 2022. 6
|
| 422 |
+
[70] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In CVPR, pages 1921-1930, 2023. 1, 6, 24, 25
|
| 423 |
+
[71] Truong Vu, Kien Do, Khang Nguyen, and Khoat Than. Face swapping as a simple arithmetic operation. arXiv preprint arXiv:2211.10812, 2022. 7, 28
|
| 424 |
+
[72] Bram Wallace, Akash Gokul, and Nikhil Naik. Edict: Exact diffusion inversion via coupled transformations. In CVPR, pages 22532-22541, 2023. 6
|
| 425 |
+
[73] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6
|
| 426 |
+
[74] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, pages 681-688. CiteSeer, 2011. 2, 4
|
| 427 |
+
[75] Chen Henry Wu and Fernando De la Torre. A latent space of stochastic diffusion models for zero-shot image editing and guidance. In ICCV, pages 7378-7387, 2023. 3
|
| 428 |
+
[76] Qiucheng Wu, Yujuan Liu, Handong Zhao, Ajinkya Kale, Trung Bui, Tong Yu, Zhe Lin, Yang Zhang, and Shiyu Chang. Uncovering the disentanglement capability in text-to-image diffusion models. In CVPR, pages 1900-1910, 2023. 5
|
| 429 |
+
[77] Sihan Xu, Yidong Huang, Jiayi Pan, Ziqiao Ma, and Joyce Chai. Inversion-free image editing with
|
| 430 |
+
|
| 431 |
+
language-guided diffusion models. In CVPR, pages 9452-9461, 2024. 5
|
| 432 |
+
[78] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In ECCV, pages 325-341, 2018. 21
|
| 433 |
+
[79] Jiwen Yu, Yinhuai Wang, Chen Zhao, Bernard Ghanem, and Jian Zhang. Freedom: Training-free energy-guided conditional diffusion model. In ICCV, pages 23174-23184, 2023. 1, 5, 8, 19, 20, 21, 28
|
| 434 |
+
[80] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In ICCV, pages 3836-3847, 2023. 1
|
| 435 |
+
[81] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pages 586-595, 2018. 6
|
| 436 |
+
[82] Zhixing Zhang, Ligong Han, Arnab Ghosh, Dimitris N Metaxas, and Jian Ren. Sine: Single image editing with text-to-image diffusion models. In CVPR, pages 6027-6037, 2023. 5
|
| 437 |
+
[83] Min Zhao, Fan Bao, Chongxuan Li, and Jun Zhu. Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. NeurIPS, 35:3609-3623, 2022. 6, 20
|
| 438 |
+
[84] Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, and Jiwen Lu. Unipc: A unified predictor-corrector framework for fast sampling of diffusion models. NeurIPS, 36:49842-49869, 2023. 28
|
| 439 |
+
[85] Linqi Zhou, Aaron Lou, Samar Khanna, and Stefano Ermon. Denoising diffusion bridge models. In ICLR, 2024. 3, 20
|
| 440 |
+
[86] Yuhao Zhu, Qi Li, Jian Wang, Cheng-Zhong Xu, and Zhenan Sun. One shot face swapping on megapixels. In CVPR, pages 4834-4844, 2021. 7, 28
|
CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7d490f8a6c31c15fb467b16752512525906c758a0b95dacc773aaa3d9dbf1a4e
|
| 3 |
+
size 687315
|
CVPR/2025/h-Edit_ Effective and Flexible Diffusion-Based Editing via Doob's h-Transform/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:19b5e21ab70ffb57b2aa8e5f4ffa234ad239d42d1dac183e7c2facb4ed156501
|
| 3 |
+
size 637516
|
CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/56588c3b-cd81-42c5-a2a1-c7cdd5a9d9cc_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d951aa30ff1aa30b98b73c0487368f6adb5eb4b0a875fe0b4cf5cbe4ea20189d
|
| 3 |
+
size 82363
|
CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/56588c3b-cd81-42c5-a2a1-c7cdd5a9d9cc_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e183b9129f951f6f1c2d7c1eda2bcde689dc56c2b67235137e87354777b0cd5f
|
| 3 |
+
size 105606
|
CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/56588c3b-cd81-42c5-a2a1-c7cdd5a9d9cc_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9a72284a42d9d6106948cbb4e15fe832e01ba6d07f18c0997337269c4c847fe7
|
| 3 |
+
size 2430530
|
CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/full.md
ADDED
|
@@ -0,0 +1,355 @@
| 1 |
+
# iG-6DoF: Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting
|
| 2 |
+
|
| 3 |
+
Tuo Cao $^{1}$ , Fei Luo $^{1}$ , Jiongming Qin $^{1}$ , Yu Jiang $^{1}$ , Yusen Wang $^{1}$ , and Chunxia Xiao $^{1*}$ \
|
| 4 |
+
$^{1}$ School of Computer Science, Wuhan University, Wuhan, Hubei, China\
|
| 5 |
+
{maplect,luofei, jiongming,jiangyul181, wangyusen, cxxiao}@whu.edu.cn\
|
| 6 |
+
http://graphvision.whu.edu.cn/
|
| 7 |
+
|
| 8 |
+
# Abstract
|
| 9 |
+
|
| 10 |
+
Traditional methods in pose estimation often rely on precise 3D models or additional data such as depth and normals, limiting their generalization, especially when objects undergo large translations or rotations. We propose iG-6DoF, a novel model-free 6D pose estimation method that uses iterative 3D Gaussian Splatting to estimate the pose of unseen objects. We first estimate an initial pose by leveraging multi-scale data augmentation and rotation-equivariant features to select a better pose hypothesis from a set of candidates. Then, we propose an iterative 3DGS approach that repeatedly renders the object and compares the rendered image with the input image to progressively improve pose estimation accuracy. The proposed method consists of an object detector, an initial pose estimator based on multi-scale rotation-equivariant features, and a coarse-to-fine pose refiner. This combination allows our method to focus on the target object in complex scenes while handling large movements and weak textures. Our method achieves state-of-the-art results on the LINEMOD, OnePose-LowTexture, and GenMOP datasets as well as our self-captured data, demonstrating strong generalization to unseen objects and robustness across various scenes.
|
| 11 |
+
|
| 12 |
+
# 1. Introduction
|
| 13 |
+
|
| 14 |
+
Estimating the rotation and translation parameters of objects within images has been a longstanding and widely studied problem in computer vision. It has extensive applications in virtual reality, robotic manipulation, and autonomous driving. Early pose estimation methods [10, 11, 26, 48, 61, 62, 71] primarily focused on instance-level pose estimation, requiring the target object to be included in the training set. They often lack generalization capability and cannot handle unseen objects. Subsequently, researchers introduced category-level pose estimation
|
| 15 |
+
|
| 16 |
+

|
| 17 |
+
Figure 1. Given a set of reference images and an input image, our method outputs the object mask, constructs a 3D Gaussian model, and estimates its 6D pose.
|
| 18 |
+
|
| 19 |
+
methods [15, 19, 63, 73, 77], which can estimate the pose parameters of objects within the same category, even if the specific instance is not present in the training set. These methods demonstrate a degree of generalization.
|
| 20 |
+
|
| 21 |
+
Recently, research has increasingly focused on generalizable pose estimation, aiming to develop a universal model to estimate an object's pose using only its CAD model or a few specific-view images [41]. Existing generalizable pose estimation methods can be primarily categorized into two types. The first type is CAD model-based. These methods [14, 21, 22, 36, 50] typically utilize the 3D or texture information of a precise CAD model as prior knowledge. They often employ feature-matching techniques to obtain 2D-3D correspondences between the query image and the CAD model. Then, they calculate pose parameters using traditional numerical algorithms such as PnP [20] or ICP [6]. The second type is model-free object pose estimation. These methods [8, 13, 25, 27, 42, 59] do not require precise CAD models but rely on a set of annotated reference images of the object. Multi-view stereo geometry provides geometric information about the object as prior knowledge. Compared to CAD-based methods, model-free methods offer greater potential for practical applications without the need to acquire accurate CAD models.
|
| 22 |
+
|
| 23 |
+
However, current model-free methods have certain limitations. For instance, FS6D [27] requires additional depth information for supervision, while Gen6D [42] relies solely on 2D representations and struggles with large object movements and rotations. OnePose [59] necessitates establishing 2D-3D correspondences, which can lead to suboptimal performance in weak-texture regions. To address these issues, we propose a pose estimation network based on multi-scale rotation-equivariant features and 3D Gaussian Splatting (3DGS). The core idea is to utilize multi-scale information to tackle challenges posed by large-scale movements and to leverage the high-quality rendering capability of 3DGS to handle pose estimation for weakly textured objects.
|
| 24 |
+
|
| 25 |
+
As illustrated in Figure 1, our method takes a set of reference images and an input image to output the object's mask, construct a 3D Gaussian model, and determine the object's 6D pose. Unlike traditional methods that match the query image to the closest reference image, which often results in inaccurate initial poses due to sparse reference data, our approach employs multi-scale data augmentation of reference images and builds a feature vector space on the icosahedral group to estimate the initial pose. Then, we refine this pose by iteratively searching the surrounding neighborhood, utilizing the high-quality rendering capabilities of 3DGS [33]. The key contributions of this work can be summarized as follows:
|
| 26 |
+
|
| 27 |
+
- We propose a novel end-to-end object pose estimation method that enables direct pose estimation of unseen objects without retraining.
|
| 28 |
+
- To enhance initialization accuracy, we introduce a multi-scale icosahedral group feature matching module, improving initial pose estimation precision.
|
| 29 |
+
- Finally, we incorporate a 3DGS-based rendering-and-comparison module for fast and accurate iterative pose optimization.
|
| 30 |
+
|
| 31 |
+
# 2. Related works
|
| 32 |
+
|
| 33 |
+
# 2.1. Model-based Unseen Object Pose Estimation
|
| 34 |
+
|
| 35 |
+
CAD model-based methods incorporate detailed 3D object models as prior knowledge to accurately determine the position and orientation of previously unseen instances within a scene. Pitteri et al. pioneered using CAD models for 3DoF pose estimation by approximating object geometry with corner points [50]. However, this approach was limited to objects with distinct corners. To address this, they subsequently introduced an embedding method to capture local 3D geometry, enabling 2D-3D correspondence establishment and $\mathrm{PnP + RANSAC}$ -based pose estimation [49]. However, both methods were confined to estimating only three degrees of freedom.
|
| 36 |
+
|
| 37 |
+
Building upon point cloud registration techniques for unseen objects, Zhao et al. [75] introduced a geometry
|
| 38 |
+
|
| 39 |
+
correspondence-based approach using generic, object-agnostic features to establish robust 3D-3D correspondences. However, this method required external methods like Mask-RCNN [24] for object class and segmentation mask determination. To address this limitation, Chen et al. [14] presented ZeroPose, a framework for joint instance segmentation and pose estimation of unseen objects. Leveraging SAM [34], they generated object proposals and employed template matching for instance segmentation. A hierarchical geometric feature matching network based on GeoTransformer [53] was used to establish correspondences. Expanding on ZeroPose, Lin et al. [40] introduced a refined matching score considering semantics, appearance, and geometry for improved segmentation. For pose estimation, they developed a two-stage partial-to-partial point matching model to effectively construct dense 3D-3D correspondences. FoundPose [46] put forward a rapid template retrieval approach founded on visual words created from DINOv2 [45] patch descriptors, which reduces the dependence on large amounts of data and speeds up matching. FreeZe [12] is the first technique to harness the synergy between geometric and vision foundation models to estimate the pose of unseen objects.
|
| 40 |
+
|
| 41 |
+
# 2.2. Model-free Unseen Object Pose Estimation
|
| 42 |
+
|
| 43 |
+
In contrast to CAD model-based approaches, manual reference view-based methods bypass the need for object CAD models by relying on manually labeled reference images. These methods primarily establish correspondences between the query image and reference views, either in 3D-3D or 2D-3D space, to determine object pose. He et al. [27] introduced a pioneering few-shot 6DoF pose estimation method using a transformer-based dense RGBD prototype matching framework to correlate query and reference views without additional training. Corsetti et al. [32] employed textual prompts for object segmentation and reformulated the problem as relative pose estimation between scenes, solved through point cloud registration.
|
| 44 |
+
|
| 45 |
+
Sun et al. [59] adapted visual localization techniques for pose estimation by constructing a Structure from Motion (SfM) model of the unseen object using reference view RGB sequences. A graph attention network matched 2D query image keypoints with 3D points in the SfM model. However, this approach suffered from poor performance on low-textured objects due to reliance on repeatable keypoints. He et al. [25] addressed this limitation by introducing a keypoint-free SfM method to reconstruct semidense point cloud models of low-textured objects using the detector-free feature matching method LoFTR [58]. Recognizing the suboptimal performance of pre-trained feature matching models [54, 58] for pose estimation, Castro et al. [13] redesigned the training pipeline using a three-view system for one-shot object-to-image matching. In ad
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
Figure 2. Overview of iG-6DoF. Our method employs a coarse-to-fine approach, where the pose estimator first estimates an initial pose from the input image, and then the pose refiner is employed to achieve a precise final pose.
|
| 49 |
+
|
| 50 |
+
dition to this, FoundationPose [69] has constructed a unified framework for handling both model-based and model-free scenarios simultaneously.
|
| 51 |
+
|
| 52 |
+
# 2.3. Pose Estimation with Neural Rendering
|
| 53 |
+
|
| 54 |
+
Recently, some methods based on Neural Rendering (NeRF [44] and 3DGS [33]) have made significant strides in representing three-dimensional scenes [4, 5, 7, 23, 28, 52, 65, 66]. These methods train a neural network to minimize the error between rendered images and real images, modeling the color and volumetric density of a scene as a function of spatial position and thereby achieving high expressiveness for complex three-dimensional environments. Several efforts have applied this framework to tasks such as pose estimation and Simultaneous Localization and Mapping (SLAM) [16, 31, 36, 57, 76]. For instance, the iNeRF method [72] starts from an assumed camera pose, renders an image, and compares the pixel-wise differences against the query image; the resulting gradients are used to iteratively adjust the camera pose until the rendered image matches the query image. Similarly, NeRF-Pose [38] makes use of NeRF's implicit portrayal of 3D scenes and trains a pose regression network to establish associations between 2D and 3D data. iComMa [60] inverts 3DGS to achieve accurate pose estimation without training, using a gradient-based framework and an end-to-end matching module to improve robustness and precision under difficult conditions. Although these methods can achieve accurate pose estimation using pixel-level comparison losses, they encounter difficulties in converging effectively in complex situations. Specifically, when there is a substantial disparity between the rendered
|
| 55 |
+
|
| 56 |
+
images and the query images, it becomes a bottleneck for precise pose estimation.
|
| 57 |
+
|
| 58 |
+
# 3. Method
|
| 59 |
+
|
| 60 |
+
Given a set of reference images of an object with known camera poses and intrinsics, our goal is to estimate the 6D pose (translation $\mathbf{T} = (t_x,t_y,t_z)\in \mathbb{R}^3$ and rotation $\mathbf{R}\in \mathbb{SO}(3)$) of the same object in a query image. The pose transformation maps points from the object coordinate system to the camera coordinate system via $P_{cam} = \mathbf{R}P_{obj} + \mathbf{T}$.
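To make the mapping concrete, the short sketch below (ours, not part of the paper) applies a 90-degree rotation about the camera z-axis and a half-metre translation to a single object point.

```python
# Tiny numerical example of the object-to-camera mapping P_cam = R P_obj + T.
import numpy as np

R = np.array([[0.0, -1.0, 0.0],   # 90-degree rotation about the z-axis
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([0.0, 0.0, 0.5])     # half a metre along the camera z-axis
P_obj = np.array([0.1, 0.0, 0.0])
P_cam = R @ P_obj + T
print(P_cam)                      # -> [0.  0.1 0.5]
```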
|
| 61 |
+
|
| 62 |
+
As illustrated in Figure 2, iG-6DoF comprises three primary modules: an object detector, an initial pose estimator, and a pose refiner. The object detector segments the object region within the image (Section 3.2). Subsequently, the initial pose estimator determines an initial rotation and translation by identifying the most similar feature within a multi-scale SO(3) group feature space (Section 3.3). Upon the initial translation and rotation, the 3DGS pose refiner computes a precise pose estimate (Section 3.4).
|
| 63 |
+
|
| 64 |
+
# 3.1. Preliminaries
|
| 65 |
+
|
| 66 |
+
Data Acquisition. To implement our method, we require a set of reference images with parameters $\{I_i^{ref}, R_i^{ref}, T_i^{ref}\}_{i=1}^{N_r}$ , where $I$ , $R$ , and $T$ represent the image and its corresponding camera extrinsics. $N_r$ is the number of reference images. Owing to off-the-shelf toolboxes provided by OnePose [59] and ARKit [3], we can easily manually annotate the 3D bounding box of an object in a video sequence and obtain camera parameters.
|
| 67 |
+
|
| 68 |
+
3D Gaussian Splatting. 3DGS is a recent and innovative technique for representing and rendering 3D scenes. It
|
| 69 |
+
|
| 70 |
+

|
| 71 |
+
Figure 3. Detector architecture: We use the features from reference images as kernels to convolve with query image features, generating heat maps. These heat maps are then processed by a CNN to produce an object mask.
|
| 72 |
+
|
| 73 |
+
first recovers camera poses and a sparse 3D point cloud of the scene from a sequence of captured images using Structure from Motion (SfM), and then constructs 3D Gaussians based on this point cloud. Each 3D Gaussian is parameterized by a 3D coordinate $\mu \in \mathbb{R}^3$, a 3D rotation quaternion $r \in \mathbb{R}^4$, a scale vector $s \in \mathbb{R}^3$, an opacity factor $\alpha \in \mathbb{R}$, and spherical harmonic coefficients $h \in \mathbb{R}^k$, where $k$ denotes the number of degrees of freedom. Finally, the loss between the rendered image and the real image is computed, and the Gaussian parameters are optimized via backpropagation.
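As a hedged illustration of this parameterization (the `Gaussian3D` container and its field names are ours, not the authors' implementation), one Gaussian can be stored as follows.

```python
# Minimal sketch of the per-Gaussian parameters listed above.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mu: np.ndarray     # 3D centre, shape (3,)
    rot: np.ndarray    # unit quaternion (w, x, y, z), shape (4,)
    scale: np.ndarray  # per-axis scale, shape (3,)
    alpha: float       # opacity
    sh: np.ndarray     # spherical-harmonic colour coefficients, shape (k,)

# One Gaussian initialised at an SfM point; colour coefficients are random placeholders.
g = Gaussian3D(mu=np.zeros(3), rot=np.array([1.0, 0.0, 0.0, 0.0]),
               scale=np.full(3, 0.01), alpha=0.5, sh=np.random.randn(16))
print(g.mu, g.alpha)
```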
|
| 74 |
+
|
| 75 |
+
# 3.2. Object Detector
|
| 76 |
+
|
| 77 |
+
Our detector builds on the TDID [2] and Gen6D [42] frameworks, which apply a correlation-based object detector. Since we need to construct a 3DGS model of the object, a more precise object mask is required, so we replace the output bounding box with a segmentation mask. Specifically, we assign a per-pixel confidence score, and pixels are considered part of the target object when their confidence exceeds a certain threshold. The core idea is to use TDID embeddings to convolve the feature map of the reference image over the query image features, calculating the correlation for each pixel. A threshold is then applied to identify high-confidence pixels as belonging to the target object, resulting in the object's mask.
|
| 78 |
+
|
| 79 |
+
As shown in Figure 3, our detector architecture employs a shared feature extractor, like VGG-11 [56], to extract features from the target and scene images. These features are subsequently combined in a joint embedding layer. Finally, a set of convolutions predicts class scores and segmentation mask regression parameters for a set of default anchor boxes on the embedding feature map.
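The sketch below illustrates only the correlation idea behind this detector: a reference feature map is slid over the query feature map, and pixels whose normalised correlation exceeds a threshold form a coarse mask. The function name and the random feature maps are illustrative stand-ins for the VGG features and learned heads described above, not the paper's implementation.

```python
import numpy as np

def correlation_heatmap(query_feat, ref_feat):
    """query_feat: (C, H, W); ref_feat: (C, h, w) -> heat map of shape (H-h+1, W-w+1)."""
    C, H, W = query_feat.shape
    _, h, w = ref_feat.shape
    kernel = ref_feat / (np.linalg.norm(ref_feat) + 1e-8)
    heat = np.zeros((H - h + 1, W - w + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            patch = query_feat[:, i:i + h, j:j + w]
            heat[i, j] = np.sum(patch * kernel) / (np.linalg.norm(patch) + 1e-8)
    return heat

query = np.random.rand(8, 32, 32)   # stand-in for query-image features
ref = np.random.rand(8, 8, 8)       # stand-in for reference-image features (the "kernel")
heat = correlation_heatmap(query, ref)
mask = heat > 0.5                    # per-pixel confidence threshold -> coarse object mask
print(heat.shape, int(mask.sum()))
```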
|
| 80 |
+
|
| 81 |
+
# 3.3. Initial Pose Estimator
|
| 82 |
+
|
| 83 |
+
The primary objective of the initial pose estimator is to select the most accurate pose hypothesis from a set of candidates. Previous methods often relied on template matching, where the closest match to the query image is selected from a reference image database. However, due to the sparsity of viewpoints in the reference image set, this approach can
|
| 84 |
+
|
| 85 |
+
lead to significant errors, particularly when the query image's viewpoint differs substantially from those in the reference set.
|
| 86 |
+
|
| 87 |
+
As shown in Figure 4, we first apply multi-scale data augmentation to the reference images to enrich the candidate pose database. Specifically, each reference image is rotated clockwise by $k\pi /2$ and scaled by factors of 2 and 0.5, respectively. Inspired by RoReg [64] and GIFT [43], we utilize rotation-equivariant features to embed the reference images. Specifically, we treat the RGB color values as 3D coordinates, establishing a mapping from the color space to 3D space, so that the point set feature extractor PointNet [51] can be applied as the backbone to extract 3D features from the 2D image. To prevent the same color at different positions from being mapped to a single 3D point, we add positional encoding [44]. Subsequently, we define a neighborhood space on the 2D image and employ an icosahedral group feature encoder to encode the reference images, yielding a multi-scale group feature space $\{V_i^{ref}\}_{i = 1}^{N_r}\in \mathbb{R}^{60\times N_r}$. In a similar manner, a feature vector $V^{que}\in \mathbb{R}^{60}$ is extracted from the query image. To obtain the initial pose parameters, we compute the cosine similarity between $V^{que}$ and each reference feature vector $V_{i}^{ref}$. The reference vector with the highest similarity score is selected, and its associated pose parameters are assigned as the initial estimate.
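A minimal sketch of this selection step, assuming the group features and per-view poses have already been computed (random values stand in for them here):

```python
import numpy as np

rng = np.random.default_rng(0)
N_r = 128                                      # reference views after multi-scale augmentation
V_ref = rng.normal(size=(N_r, 60))             # {V_i^ref}: 60-d icosahedral group features
ref_poses = [(np.eye(3), np.zeros(3))] * N_r   # (R, T) stored with every reference view
V_que = rng.normal(size=60)                    # V^que extracted from the query image

cos = (V_ref @ V_que) / (np.linalg.norm(V_ref, axis=1) * np.linalg.norm(V_que))
best = int(np.argmax(cos))                     # most similar reference view
R_init, T_init = ref_poses[best]               # its pose becomes the initial estimate
print(best, float(cos[best]))
```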
|
| 88 |
+
|
| 89 |
+
Group Feature Space. Given a target image that has been segmented using a mask, we employ our proposed method to project each pixel within the segmentation mask onto a corresponding 3D point in space, resulting in a set of 3D points denoted as $\{P_i \in \mathbb{R}^3\}$ . To establish local neighborhoods for each pixel, we define $N_P = \{p_i | \| p_i - p \| < 5\}$ , where $N_P$ represents the neighborhood of pixel $p$ , and $p_i$ denotes the position of a neighboring pixel located within a 5-pixel radius of $p$ .
|
| 90 |
+
|
| 91 |
+
Given an input neighborhood point set $N_P$ , we apply an element $g$ of the icosahedral group $G$ to generate rotated point sets. Each rotated point set is processed by a shared point set feature extractor, denoted as $\phi$ , to produce an n-dimensional feature vector, expressed as:
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
f_0(g) = \phi\left(T_g \circ N_P\right), \tag{1}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
where $f_0: G \to \mathbb{R}^{n_0}$ represents the output group feature for point $p$ , and $T_g \circ N_P$ denotes the application of rotation $g$ to the point set $N_P$ . Since the icosahedral group $G$ comprises 60 rotations, the group feature $f_0$ can be efficiently stored as a $60 \times n_0$ matrix. We apply PointNet [51] as backbone $\phi$ . Then, we adopt a localized icosahedral group convolution for feature embedding:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
[f_{l+1}(g)]_j = \sum_{i}^{13} w_{j,i}^{T} f_{l}\left(h_i g\right) + b_j, \tag{2}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+

|
| 104 |
+
Figure 4. Architecture of the pose estimator. We first apply multi-scale image augmentations to the reference images, including rotations and scaling. Subsequently, we extract rotation-equivariant features using the icosahedral group. Finally, the optimal initial pose is determined by comparing the similarity of the feature vectors.
|
| 105 |
+
|
| 106 |
+
where $l$ denotes the layer index, and $f_{l}(g)\in \mathbb{R}^{n_{l}}$ and $f_{l + 1}(g)\in \mathbb{R}^{n_{l + 1}}$ represent the input and output feature vectors, respectively. $[\cdot ]_j$ extracts the $j$-th element from a vector. The neighborhood set is denoted $H = \{h_i\}_{i=1}^{13}$, where each $h_i$ is an element of the group $G$. The trainable weight associated with the $i$-th neighbor and $j$-th output feature is represented by $w_{j,i}\in \mathbb{R}^{n_{l}}$, with $b_{j}$ being the corresponding bias. Note that $j$ ranges from 1 to $n_{l+1}$, indexing the output feature dimensions. Given the group's closure property, the composition $h_{i}g$ is also an element of $G$.
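The sketch below mirrors the indexing structure of Eqs. (1)-(2): a 60-row group feature is updated by gathering the features at the composed elements $h_i g$ and taking a learned weighted sum over the 13 neighbours. The composition table here is a random placeholder; in practice it would come from the icosahedral group's Cayley table, and all array names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
G, NEIGH, n_in, n_out = 60, 13, 32, 64
f_l = rng.normal(size=(G, n_in))                  # input group feature f_l(g)
W = rng.normal(size=(n_out, NEIGH, n_in)) * 0.1   # weights w_{j,i}
b = np.zeros(n_out)                               # biases b_j
compose = rng.integers(0, G, size=(NEIGH, G))     # placeholder for the index of h_i g

f_next = np.zeros((G, n_out))
for g in range(G):
    gathered = f_l[compose[:, g]]                 # (NEIGH, n_in): f_l(h_i g)
    f_next[g] = np.einsum('jik,ik->j', W, gathered) + b   # Eq. (2)
print(f_next.shape)                               # (60, n_out)
```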
|
| 107 |
+
|
| 108 |
+
# 3.4. Pose Refiner
|
| 109 |
+
|
| 110 |
+
The pose refiner aims to refine an initial pose $\mathcal{T}_{\mathrm{init}}$ with an input image. To achieve this, we leverage the high rendering quality of 3DGS [33]. By iteratively rendering and comparing the rendered image with the input image, we progressively update the pose estimate until convergence. As shown in Figure 5, the refiner takes as input $\mathcal{T}_{\mathrm{init}}^k$ and a 3DGS model and predicts an updated pose $\mathcal{T}_{\mathrm{init}}^{k + 1} = \mathcal{T}_{\Delta}^{k + 1}\mathcal{T}_{\mathrm{init}}^k$ and a rendered image $I_{\text{render}}^{k + 1}$. We iteratively refine the pose parameters by minimizing the SSIM loss between the rendered image and the input image $I_{\text{que}}$. Similar to [35, 36, 39], we decompose $\mathcal{T}_{\Delta}^{k + 1}$ into its rotational component $R_{\Delta}^{k + 1}$ and translational component $T_{\Delta}^{k + 1}$ (note that $\mathcal{T} \in \mathbb{SE}(3)$, represented as a $4\times 4$ matrix, and $T \in \mathbb{R}^3$). To decouple the rotation and translation components, the rotation center is shifted from the camera origin to the object's center, as determined by the current pose estimate. This modification ensures that applying a rotation does not alter the object's position within the camera frame. The iterative optimization process of the refiner is as follows:
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
\begin{aligned} \mathcal{T}_{\Delta}^{k+1} = & \arg\min_{T_{\Delta}^{k+1}} \mathcal{L}_{T}\left(\mathcal{R}_{gs}\left(T_{\Delta}^{k+1} + \mathcal{T}^{k}, GSM\right), I_{que}\right) \\ & + \arg\min_{R_{\Delta}^{k+1}} \mathcal{L}_{R}\left(\mathcal{R}_{gs}\left(R_{\Delta}^{k+1} \odot \left(T_{\Delta}^{k+1} + \mathcal{T}^{k}\right), GSM\right), I_{que}\right), \end{aligned} \tag{3}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
Figure 5. Diagram of pose refiner. Given the pose from the previous time step $\mathcal{T}_{\mathrm{init}}^k$ , we decouple $\mathcal{T}_{\Delta}^{k+1}$ into $R_{\Delta}^{k+1}$ and $T_{\Delta}^{k+1}$ for separate estimation. We first estimate the translation vector, followed by the rotation vector. This process is iterated until reaching the specified number of steps or convergence.
|
| 118 |
+
|
| 119 |
+
where $\mathcal{R}_{gs}$ denotes the 3D Gaussian renderer, $\odot$ signifies the application of a rigid rotation, and $GSM$ is a 3DGS model.
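The following is a hedged sketch of the alternating update in Eq. (3) and Figure 5: a translation delta is optimised first with the rotation fixed, then a small axis-angle rotation delta is optimised about the object centre. `toy_render` and the mean-squared error are stand-ins for the differentiable 3DGS renderer $\mathcal{R}_{gs}$ and the SSIM / MS-SSIM losses; all names are illustrative, not the authors' code.

```python
import torch

def skew(w):
    """so(3) hat operator: 3-vector -> skew-symmetric matrix."""
    z = torch.zeros((), dtype=w.dtype)
    return torch.stack([torch.stack([z, -w[2], w[1]]),
                        torch.stack([w[2], z, -w[0]]),
                        torch.stack([-w[1], w[0], z])])

def refine_pose(render_fn, I_que, R_init, T_init, steps=200, lr=1e-2):
    R, T = R_init.clone(), T_init.clone()
    for _ in range(steps):
        # Translation step: one gradient step on a small delta, rotation fixed.
        t_delta = torch.zeros(3, requires_grad=True)
        loss_t = ((render_fn(R, T + t_delta) - I_que) ** 2).mean()
        loss_t.backward()
        T = (T - lr * t_delta.grad).detach()
        # Rotation step: small axis-angle update applied about the object centre.
        w = torch.zeros(3, requires_grad=True)
        loss_r = ((render_fn(torch.matrix_exp(skew(w)) @ R, T) - I_que) ** 2).mean()
        loss_r.backward()
        R = (torch.matrix_exp(skew(-lr * w.grad)) @ R).detach()
    return R, T

# Toy differentiable "renderer" (a fixed linear map squashed into an 8x8 image),
# included only so the sketch runs end to end; it is NOT a 3DGS renderer.
torch.manual_seed(0)
A = torch.randn(64, 12)
def toy_render(R, T):
    return torch.tanh(A @ torch.cat([R.reshape(-1), T])).reshape(8, 8)

R_gt = torch.matrix_exp(skew(torch.tensor([0.10, -0.20, 0.05])))
T_gt = torch.tensor([0.30, -0.10, 0.20])
I_que = toy_render(R_gt, T_gt)
R_hat, T_hat = refine_pose(toy_render, I_que, torch.eye(3), torch.zeros(3))
print(float(((toy_render(R_hat, T_hat) - I_que) ** 2).mean()))
```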
|
| 120 |
+
|
| 121 |
+
# 3.5. Loss Functions
|
| 122 |
+
|
| 123 |
+
We use the widely adopted Binary Cross Entropy (BCE) loss to train our detector for pixel-wise segmentation, denoted as $\mathcal{L}_{det}$ :
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\mathcal{L}_{\text{det}} = \mathcal{L}_{BCE}(M, \bar{M}), \tag{4}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where $M$ and $\bar{M}$ represent the predicted and ground truth segmentation masks, respectively.
|
| 130 |
+
|
| 131 |
+
We apply the descriptor construction loss from RoReg [64] to train the pose estimator. Given a batch of ground-truth image pairs $(I_q, I_r)$ and their corresponding ground-truth rotations $R_{I_q}$, we compute the outputs of the group feature embedder, which include the rotation-invariant descriptors $(d_{I_q}, d_{I_r}^+)$, the rotation-equivariant group features $(f_{I_q}, f_{I_r}^+)$, and the corresponding ground-truth coarse rotations $g_{I_r}^+$. For every sample in the batch, we compute the loss:
|
| 132 |
+
|
| 133 |
+
$$
|
| 134 |
+
\mathcal{L}_{1}(d, d^{+}, D^{-}) = \frac{e^{\|d - d^{+}\|_{2}} - \min_{d^{-} \in D^{-}} e^{\|d - d^{-}\|_{2}}}{e^{\|d - d^{+}\|_{2}} + \sum_{d^{-} \in D^{-}} e^{\|d - d^{-}\|_{2}}} \tag{5}
|
| 135 |
+
$$
|
| 136 |
+
|
| 137 |
+
$$
|
| 138 |
+
\mathcal{L}_{2}(f, f^{+}, g^{+}) = -\log\left(\frac{e^{\langle f, P_{g^{+}} \circ f^{+}\rangle}}{\sum_{g \in G} e^{\langle f, P_{g} \circ f^{+}\rangle}}\right) \tag{6}
|
| 139 |
+
$$
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\mathcal{L}_{\text{group}} = \lambda \mathcal{L}_{1}(d, d^{+}, D^{-}) + \mathcal{L}_{2}(f, f^{+}, g^{+}), \tag{7}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
where the subscript $I_r$ is omitted for simplicity. Equation 5 supervises the rotation-invariant descriptor, where $d$ is the descriptor, $d^+$ is the matched descriptor, $D^-$ are the negative descriptors in the batch, and $\|\cdot\|_2$ is the L2 norm. Equation 6 supervises the rotation-equivariant group feature.
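A hedged numpy sketch of Eqs. (5)-(6) follows; all values and the permutation action $P_g$ are random stand-ins (in the real pipeline $P_g$ reorders the 60 group-feature rows according to the icosahedral group), and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_desc(d, d_pos, D_neg):                  # Eq. (5)
    e_pos = np.exp(np.linalg.norm(d - d_pos))
    e_neg = np.exp(np.linalg.norm(d - D_neg, axis=1))
    return (e_pos - e_neg.min()) / (e_pos + e_neg.sum())

def loss_group(f, f_pos, g_pos, perms):          # Eq. (6)
    scores = np.array([np.sum(f * f_pos[p]) for p in perms])   # <f, P_g o f+>
    return -np.log(np.exp(scores[g_pos]) / np.exp(scores).sum())

d, d_pos = rng.normal(size=32), rng.normal(size=32)
D_neg = rng.normal(size=(8, 32))                 # negatives from the batch
f, f_pos = rng.normal(size=(60, 16)), rng.normal(size=(60, 16))
perms = [rng.permutation(60) for _ in range(60)] # stand-in for {P_g : g in G}
print(float(loss_desc(d, d_pos, D_neg)), float(loss_group(f, f_pos, 0, perms)))
```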
|
| 146 |
+
|
| 147 |
+
Finally, the pose loss $\mathcal{L}_{\text{pose}}$ is defined as
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
\mathcal{L}_{\text{pose}} = \mathcal{L}_{R} + \mathcal{L}_{T} \tag{8}
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
\mathcal{L}_{T} = \mathcal{L}_{SSIM}, \tag{9}
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
$$
|
| 158 |
+
\mathcal{L}_{R} = \mathcal{L}_{SSIM} + \mathcal{L}_{MS-SSIM}, \tag{10}
|
| 159 |
+
$$
|
| 160 |
+
|
| 161 |
+
where $\mathcal{L}_{SSIM}$ and $\mathcal{L}_{MS-SSIM}$ represent the SSIM-based [68] and multi-scale SSIM-based [67] loss functions, respectively. The overall loss function of our method is:
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
\mathcal{L}_{\text{total}} = \lambda_{1} \mathcal{L}_{\text{det}} + \lambda_{2} \mathcal{L}_{\text{group}} + \lambda_{3} \mathcal{L}_{\text{pose}}, \tag{11}
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
where $\lambda_{\{1,2,3\}}$ represent the hyperparameters, which we set to 0.3, 0.2, and 0.5, respectively.
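For completeness, the total objective of Eq. (11) with the weights above reduces to a plain weighted sum; the individual terms below are placeholder scalars standing in for Eqs. (4)-(10).

```python
# Weighted combination of the detector, group-feature, and pose losses (Eq. 11).
lambda1, lambda2, lambda3 = 0.3, 0.2, 0.5

def total_loss(l_det, l_group, l_pose):
    return lambda1 * l_det + lambda2 * l_group + lambda3 * l_pose

print(total_loss(0.7, 1.2, 0.4))   # example scalar values -> 0.65
```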
|
| 168 |
+
|
| 169 |
+
# 4. Experiments
|
| 170 |
+
|
| 171 |
+
Training Data. We employ the synthetic MegaPose dataset [36] for training, which was generated using BlenderProc [17] with 1,000 diverse objects from the Google Scanned Objects dataset [18] and comprises one million synthetic RGB images.
|
| 172 |
+
|
| 173 |
+
Evaluation data. We evaluate our proposed model on three widely used benchmarks: LINEMOD, OnePose-LowTexture, and GenMOP, to demonstrate its generalization ability across diverse object categories and scenes. The LINEMOD dataset [29], comprising 13 objects, is a commonly employed benchmark for 6D object pose estimation. Adhering to the established protocol [25, 37, 42, 47, 59], the training partition of LINEMOD is designated as reference data, while the testing partition serves as the evaluation set. The OnePose-LowTexture dataset [59] presents a challenging scenario with objects exhibiting minimal or absent texture, containing eight scanned objects for evaluation. The GenMOP [42] dataset comprises ten distinct objects. For each object, two video sequences were captured under varying environmental conditions, including background and lighting variations. Each video sequence is segmented into approximately 200 individual images.
|
| 174 |
+
|
| 175 |
+
Metrics. To evaluate our model, we employ the commonly used Average Distance (ADD) metric [29] and projection error. For ADD, we calculate both the recall rate at $10\%$ of the object diameter (ADD-0.1d) and the Area Under the Curve (AUC) within a $10\mathrm{cm}$ radius (ADD-AUC). Regarding projection error, we compute the recall rate at a pixel threshold of 5 (Prj-5).
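A small sketch of the ADD computation and the ADD-0.1d criterion described above (the toy model points and the diameter estimate are placeholders, and the function name is ours):

```python
import numpy as np

def add_metric(pts, R_gt, T_gt, R_pred, T_pred):
    """Mean distance between model points under the GT and predicted poses."""
    d = np.linalg.norm((pts @ R_gt.T + T_gt) - (pts @ R_pred.T + T_pred), axis=1)
    return d.mean()

pts = np.random.rand(500, 3) * 0.1              # toy object model points (metres)
diameter = np.linalg.norm(pts.max(0) - pts.min(0))
R, T = np.eye(3), np.zeros(3)
add = add_metric(pts, R, T, R, T + np.array([0.002, 0.0, 0.0]))   # 2 mm error
print(add, bool(add < 0.1 * diameter))          # ADD-0.1d: correct if below 10% of diameter
```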
|
| 176 |
+
|
| 177 |
+
Setup. We primarily compare iG-6DoF against Gen6D [42], Cas6D [47], Onepose [59], GS-Pose [8] and MFOS [37]. To ensure a fair comparison and demonstrate the effectiveness of each module, we evaluated our initial pose estimator and pose refiner on the aforementioned three separate datasets.
|
| 178 |
+
|
| 179 |
+
# 4.1. Results on LINEMOD
|
| 180 |
+
|
| 181 |
+
We first evaluate iG-6DoF on a subset of LINEMOD objects against OSOP [55], Gen6D [42], Cas6D [47], GS-Pose [8] and LocPoseNet [74] and present quantitative results in Table 1. Without pose refinement, iG-6DoF achieves an ADD(S)-0.1d of $45.99\%$ . After refinement, performance improves to $83.22\%$ .
|
| 182 |
+
|
| 183 |
+
Then, we compare our method against state-of-the-art one-shot approaches, including Gen6D [42], OnePose [59], OnePose++ [25] and MFOS [37], using ADD(S)-0.1d and Proj2D metrics. As indicated in Table 2, our method consistently outperforms these baselines. Notably, unlike OnePose and OnePose++ which rely on pre-reconstructed 3D shape models, our approach operates without requiring prior 3D object knowledge. This leads to improvements of $8.2\%$ and $2.3\%$ on ADD-S and Proj2D, respectively, over the strongest baseline.
|
| 184 |
+
|
| 185 |
+
# 4.2. Results on OnePose-LowTexture
|
| 186 |
+
|
| 187 |
+
We then evaluate iG-6DoF on the challenging OnePose-LowTexture dataset [25], comparing it against state-of-the-art baselines including OnePose [59], OnePose++ [25], Gen6D [42], and the instance-specific PVNet [48]. Table 3 presents the standard cm-degree accuracy at different thresholds, demonstrating the superior performance of iG-6DoF. Specifically, our method outperforms all baseline methods at the $1\mathrm{cm}/1\mathrm{deg}$ and $5\mathrm{cm}/5\mathrm{deg}$ thresholds. OnePose++ eliminates the reliance on local feature matching by adopting the keypoint-free LoFTR [58], improving OnePose's performance to $72.1\%$, yet it still falls short of iG-6DoF despite requiring ground-truth bounding boxes.
|
| 188 |
+
|
| 189 |
+
# 4.3. Results on GenMOP
|
| 190 |
+
|
| 191 |
+
We finally compare iG-6DoF with generalizable image-matching based ObjDesc [1], two instance-specific estimators PVNet [48] and RLLG [9] and model-free method Gen6D [42] on GenMOP dataset. To ensure a fair comparison, we adopt the same experimental setup as Gen6D, using the original reference images without data augmentation. All testing data is unseen during the training of iG-6DoF, Gen6D, and ObjDesc. For PVNet and RLLG, we train a separate model for each object. Quantitative results are shown in Table 5, our method essentially achieves the current state-of-the-art performance.
|
| 192 |
+
|
| 193 |
+
# 4.4. Ablation Study
|
| 194 |
+
|
| 195 |
+
To verify the effectiveness of each module in our proposed method, we conducted ablation studies on the widely used LM [29] dataset. Performance is assessed using the BOP [30] metric.
|
| 196 |
+
|
| 197 |
+
Ablation study on the pose estimator. To validate the design of the initial pose estimator, we conduct ablation studies on the LM dataset; the results are shown in Table 4
|
| 198 |
+
|
| 199 |
+
<table><tr><td>Method</td><td>Pose Refiner</td><td>cat</td><td>duck</td><td>bvise</td><td>cam</td><td>driller</td><td>Avg.</td></tr><tr><td>OSOP [55]</td><td></td><td>34.43</td><td>20.08</td><td>50.41</td><td>32.30</td><td>43.94</td><td>36.23</td></tr><tr><td>Gen6D [42]</td><td></td><td>15.97</td><td>7.89</td><td>25.48</td><td>22.06</td><td>17.24</td><td>17.73</td></tr><tr><td>LocPoseNet [74]</td><td>w/o</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>27.27</td></tr><tr><td>GS-Pose [8]</td><td></td><td>47.80</td><td>30.70</td><td>63.47</td><td>44.61</td><td>47.27</td><td>46.77</td></tr><tr><td>iG-6DoF (Ours)</td><td></td><td>46.53</td><td>31.61</td><td>61.97</td><td>41.55</td><td>48.31</td><td>45.99</td></tr><tr><td>OSOP [55]</td><td rowspan="4">w/</td><td>42.54</td><td>22.16</td><td>55.59</td><td>36.21</td><td>49.57</td><td>42.21</td></tr><tr><td>Gen6D [42]</td><td>60.68</td><td>40.47</td><td>77.03</td><td>66.67</td><td>67.39</td><td>62.45</td></tr><tr><td>Cas6D [47]</td><td>60.58</td><td>51.27</td><td>86.72</td><td>70.10</td><td>84.84</td><td>70.72</td></tr><tr><td>iG-6DoF (Ours)</td><td>80.89</td><td>66.39</td><td>95.88</td><td>87.23</td><td>85.69</td><td>83.22</td></tr></table>
|
| 200 |
+
|
| 201 |
+
Table 1. Quantitative results on a subset of objects from the LINEMOD dataset [29] in terms of ADD(S)-0.1d. The best performance is highlighted in bold.
|
| 202 |
+
|
| 203 |
+
<table><tr><td rowspan="2">Method</td><td colspan="14">Object Name</td><td>Avg.</td></tr><tr><td>ape</td><td>benchwise</td><td>cam</td><td>can</td><td>cat</td><td>driller</td><td>duck</td><td>eggbox*</td><td>glue*</td><td>holepuncher</td><td>iron</td><td>lamp</td><td>phone</td><td></td><td></td></tr><tr><td></td><td colspan="13">ADD(S)-0.1d</td><td></td><td></td></tr><tr><td>Gen6D</td><td>-</td><td>62.1</td><td>45.6</td><td>-</td><td>40.9</td><td>48.8</td><td>16.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>OnePose</td><td>11.8</td><td>92.6</td><td>88.1</td><td>77.2</td><td>47.9</td><td>74.5</td><td>34.2</td><td>71.3</td><td>37.5</td><td>54.9</td><td>89.2</td><td>87.6</td><td>60.6</td><td>63.6</td><td></td></tr><tr><td>OnePose++</td><td>31.2</td><td>97.3</td><td>88.0</td><td>89.8</td><td>70.4</td><td>92.5</td><td>42.3</td><td>99.7</td><td>48.0</td><td>69.7</td><td>97.4</td><td>97.8</td><td>76.0</td><td>76.9</td><td></td></tr><tr><td>MFOS</td><td>47.2</td><td>73.5</td><td>87.5</td><td>85.4</td><td>80.2</td><td>92.4</td><td>60.8</td><td>99.6</td><td>69.7</td><td>93.5</td><td>82.4</td><td>95.8</td><td>51.6</td><td>78.4</td><td></td></tr><tr><td>Ours</td><td>64.3</td><td>96.3</td><td>88.6</td><td>92.1</td><td>83.2</td><td>88.6</td><td>73.3</td><td>99.6</td><td>81.3</td><td>94.3</td><td>81.3</td><td>88.6</td><td>73.1</td><td>85.1</td><td></td></tr><tr><td></td><td colspan="13">Proj2D</td><td></td><td></td></tr><tr><td>OnePose</td><td>35.2</td><td>94.4</td><td>96.8</td><td>87.4</td><td>77.2</td><td>76.0</td><td>73.0</td><td>89.9</td><td>55.1</td><td>79.1</td><td>92.4</td><td>88.9</td><td>69.4</td><td>78.1</td><td></td></tr><tr><td>OnePose++</td><td>97.3</td><td>99.6</td><td>99.6</td><td>99.2</td><td>98.7</td><td>93.1</td><td>97.7</td><td>98.7</td><td>51.8</td><td>98.6</td><td>98.9</td><td>98.8</td><td>94.5</td><td>94.3</td><td></td></tr><tr><td>MFOS</td><td>97.1</td><td>94.1</td><td>98.4</td><td>98.2</td><td>98.4</td><td>95.7</td><td>96.3</td><td>99.0</td><td>94.8</td><td>99.3</td><td>94.6</td><td>94.2</td><td>88.9</td><td>96.1</td><td></td></tr><tr><td>Ours</td><td>97.8</td><td>99.2</td><td>97.8</td><td>98.2</td><td>99.1</td><td>91.5</td><td>97.6</td><td>99.3</td><td>95.1</td><td>98.9</td><td>95.2</td><td>95.6</td><td>90.3</td><td>96.6</td><td></td></tr></table>
|
| 204 |
+
|
| 205 |
+
Table 2. Results on LINEMOD and comparison with other model-free baselines. Symmetric objects are indicated by $^*$ . The best performance is highlighted in bold, while the second best results are underlined.
|
| 206 |
+
|
| 207 |
+
<table><tr><td rowspan="2"></td><td rowspan="2">GT-Mask</td><td colspan="3">OnePose-LowTexture</td></tr><tr><td>1cm-1deg</td><td>3cm-3deg</td><td>5cm-5deg</td></tr><tr><td>HLoc (SPP + SPG)</td><td>✓</td><td>13.8</td><td>36.1</td><td>42.2</td></tr><tr><td>HLoc (LoFTR*)</td><td>✓</td><td>13.2</td><td>41.3</td><td>52.3</td></tr><tr><td>PVNet</td><td>✓</td><td>15.1</td><td>33.2</td><td>48.6</td></tr><tr><td>Gen6D</td><td>X</td><td>11.5</td><td>31.6</td><td>25.9</td></tr><tr><td>OnePose</td><td>✓</td><td>12.4</td><td>35.7</td><td>45.4</td></tr><tr><td>OnePose++</td><td>✓</td><td>16.8</td><td>57.7</td><td>72.1</td></tr><tr><td>MFOS</td><td>✓</td><td>14.1</td><td>54.3</td><td>74.2</td></tr><tr><td>Ours</td><td>X</td><td>16.6</td><td>53.2</td><td>73.5</td></tr><tr><td>Ours</td><td>✓</td><td>17.2</td><td>55.6</td><td>75.1</td></tr></table>
|
| 208 |
+
|
| 209 |
+
(C1 and C2). We select ObjDesc [70] and Gen6D [42] as comparison baselines. The results show that our method achieves a more accurate initial pose because we search within a multi-scale pose hypothesis space, whereas the baselines only select the most similar candidate from the reference images as the initial pose.
|
| 210 |
+
|
| 211 |
+
Ablation study on the pose refiner. To highlight the advantages of our 3DGS-based refiner for unseen objects over other 6D pose estimation methods, such as those used
|
| 212 |
+
|
| 213 |
+
Table 3. Comparison with baselines on OnePose-LowTexture. We denote methods relying on a GT object mask as 'GT-Mask'.
|
| 214 |
+
|
| 215 |
+
<table><tr><td rowspan="2">Row</td><td rowspan="2">Method</td><td colspan="3">LM</td></tr><tr><td>\( AR_{VSD} \)</td><td>\( AR_{MSSD} \)</td><td>\( AR_{MSPD} \)</td></tr><tr><td>A0</td><td>iG-6DoF</td><td>0.549</td><td>0.689</td><td>0.853</td></tr><tr><td>B1</td><td>A0: GS refiner→Gen6D refiner</td><td>0.538</td><td>0.672</td><td>0.812</td></tr><tr><td>B2</td><td>A0: GS refiner→DeepIM refiner</td><td>0.512</td><td>0.638</td><td>0.779</td></tr><tr><td>C1</td><td>A0: Pose Estimator → Objdesc selector</td><td>0.424</td><td>0.503</td><td>0.637</td></tr><tr><td>C2</td><td>A0: Pose Estimator → Gen6D selector</td><td>0.432</td><td>0.511</td><td>0.669</td></tr><tr><td>D1</td><td>A0: w/o data augmentation</td><td>0.521</td><td>0.624</td><td>0.801</td></tr><tr><td>D2</td><td>B1: w/o data augmentation</td><td>0.501</td><td>0.613</td><td>0.786</td></tr><tr><td>D3</td><td>B2: w/o data augmentation</td><td>0.478</td><td>0.601</td><td>0.732</td></tr><tr><td>E0</td><td>A0: \( N_r \rightarrow 16 \)</td><td>0.432</td><td>0.492</td><td>0.766</td></tr><tr><td>E1</td><td>A0: \( N_r \rightarrow 32 \)</td><td>0.446</td><td>0.624</td><td>0.789</td></tr><tr><td>E2</td><td>A0: \( N_r \rightarrow 64 \)</td><td>0.533</td><td>0.657</td><td>0.834</td></tr><tr><td>E3</td><td>A0: \( N_r \rightarrow 128 \)</td><td>0.587</td><td>0.712</td><td>0.866</td></tr></table>
Table 4. Ablation study under BOP setup on LM dataset.
in Gen6D [42] and DeepIM [39], we present results in Table 4 (B1 and B2). For the baseline refiner, DeepIM [39], we treat the reference image selected by our selector as the rendered image and use DeepIM to match it against the query image to update the pose. Note that further refinement with additional DeepIM iterations is not feasible, since no object model is available to render a new image from the updated pose. All refiners, including DeepIM, Gen6D, and our 3DGS-based refiner, are trained on the same dataset. The results indicate that our 3DGS-based refiner demonstrates superior generalization capabilities on unseen objects compared to DeepIM and Gen6D.
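
Conceptually, the advantage comes from the refiner always having something to re-render: the object is represented by the reconstructed 3D Gaussians, so a fresh image can be synthesised at every updated pose, whereas a DeepIM-style refiner driven by a fixed reference image can only be applied once. A minimal sketch of such a render-and-compare loop is shown below; it is illustrative only, and `render_gaussians` and `update_net` are placeholder names rather than the authors' API.

```python
import torch

def refine_pose_3dgs(gaussians, query_crop, K, pose_init,
                     render_gaussians, update_net, n_iters=3):
    """Iterative render-and-compare refinement with a 3DGS object representation."""
    pose = pose_init.clone()                              # 4x4 object-to-camera transform
    for _ in range(n_iters):
        rendering = render_gaussians(gaussians, pose, K)  # re-render at the current pose estimate
        delta = update_net(rendering, query_crop)         # regress a relative SE(3) update (4x4)
        pose = delta @ pose                               # apply the update and iterate
    return pose
```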

Figure 6. Qualitative results captured by us in real-world scenes (panels: Input Image, Pred. Mask, 3DGS Render, 6D Pose). More visual results, discussion and analysis are provided in the supplementary material.
<table><tr><td rowspan="2">Metrics</td><td rowspan="2">Method</td><td colspan="5">Object Name</td><td rowspan="2">avg.</td></tr><tr><td>Chair</td><td>PlugEN</td><td>Piggy</td><td>Scissors</td><td>TFormer</td></tr><tr><td rowspan="5">ADD-0.1d</td><td>ObjDesc [70]</td><td>3.50</td><td>5.14</td><td>14.07</td><td>1.25</td><td>7.54</td><td>8.55</td></tr><tr><td>Gen6D w/o Ref.</td><td>14.00</td><td>7.48</td><td>39.70</td><td>16.81</td><td>11.51</td><td>17.90</td></tr><tr><td>Gen6D w Ref.</td><td>61.50</td><td>19.63</td><td>75.38</td><td>32.76</td><td>62.70</td><td>50.39</td></tr><tr><td>Ours w/o Ref.</td><td>46.32</td><td>17.93</td><td>71.84</td><td>29.57</td><td>55.92</td><td>44.32</td></tr><tr><td>Ours w Ref.</td><td>66.83</td><td>32.61</td><td>79.84</td><td>40.35</td><td>60.81</td><td>56.10</td></tr><tr><td rowspan="5">Proj2D</td><td>ObjDesc [70]</td><td>4.00</td><td>10.75</td><td>4.52</td><td>18.53</td><td>8.33</td><td>9.23</td></tr><tr><td>Gen6D w/o Ref.</td><td>11.50</td><td>40.65</td><td>33.17</td><td>34.05</td><td>64.29</td><td>36.73</td></tr><tr><td>Gen6D w Ref.</td><td>55.00</td><td>72.90</td><td>92.96</td><td>93.53</td><td>98.81</td><td>82.64</td></tr><tr><td>Ours w/o Ref.</td><td>48.91</td><td>65.93</td><td>84.6</td><td>81.34</td><td>81.61</td><td>72.49</td></tr><tr><td>Ours w Ref.</td><td>66.83</td><td>79.64</td><td>95.11</td><td>92.18</td><td>97.92</td><td>86.34</td></tr></table>
Table 5. Performance on the GenMOP dataset. "Ours w/o Ref." means not using the pose refiner in the iG-6DoF estimator.
Ablation study on data augmentation. To demonstrate the impact of our data augmentation module, we selected A0, B1, and B2 as baselines and compared the quantitative results before and after removing the data augmentation module. As shown in Table 4 (D1, D2, and D3), the results indicate that our data augmentation module significantly improves overall performance.
Ablation study on number of reference images. Finally, we evaluated the impact of the number of reference images on our method's performance by setting the reference image count to 16, 32, 64, and 128 in Table 4 (E0 to E3). As expected, performance improves as the number of reference images increases, which aligns with our intuition. Thanks to the effectiveness of our data augmentation module, our method still achieves commendable results even with a smaller number of reference images.
Runtime. iG-6DoF processes each image (resolution $480 \times 640$) in approximately 0.5 seconds on a desktop equipped with an Intel Xeon Silver 4310 CPU @ 2.10GHz and an Nvidia GeForce RTX 3090 GPU. This includes 0.12 seconds for object detection, 0.01 seconds for initial pose estimation, and 0.4 seconds for pose refinement.
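
A per-stage breakdown like the one above can be collected with a simple wall-clock harness such as the generic sketch below; the stage functions named in the usage comments are placeholders, not the paper's API.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(stage):
    """Accumulate wall-clock time per pipeline stage."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[stage] = timings.get(stage, 0.0) + time.perf_counter() - t0

# Example usage (detect / estimate_initial_pose / refine_pose are placeholders):
# with timed("detection"):   box = detect(image)
# with timed("init pose"):   pose0 = estimate_initial_pose(crop)
# with timed("refinement"):  pose = refine_pose(gaussians, crop, K, pose0, ...)
```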
# 5. Conclusion
In this paper, we introduced a novel end-to-end pose estimation method based on 3D Gaussian Splatting that does not require the object's CAD model. Our method demonstrates strong generalization, effectively estimating the pose of unseen objects from only a set of reference images. Unlike previous work, which often relies on precise 3D models or additional supervisory data and struggles with large object translations or rotations, our method is robust and versatile. It consistently achieves state-of-the-art performance on widely used benchmarks. Furthermore, experiments on scenes we captured ourselves validate its generalization potential and efficacy in diverse scenarios.
# 6. Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (No. 62372336 and No. 62172309).
# References
[1] Adel Ahmadyan and Liangkai Zhang. Objectron: A large scale dataset of object-centric videos in the wild with pose annotations. In CVPR, 2021. 6
[2] Phil Ammirato, Cheng-Yang Fu, Mykhailo Shvets, Jana Kosecka, and Alexander C Berg. Target driven instance detection. arXiv preprint arXiv:1803.04610, 2018. 4
[3] Apple. Arkit. https://developer.apple.com/augmentedreality/, 2017. 3
[4] Gil Avraham, Julian Straub, Tianwei Shen, Tsun-Yi Yang, Hugo Germain, Chris Sweeney, Vasileios Balntas, David Novotny, Daniel DeTone, and Richard Newcombe. Nerfels: renderable neural codes for improved camera pose estimation. In CVPR, pages 5061-5070, 2022. 3
[5] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In CVPR, pages 5470–5479, 2022. 3
[6] P.J. Besl and Neil D. McKay. A method for registration of 3-d shapes. IEEE TPAMI, 1992. 1
[7] Wenjing Bian, Zirui Wang, Kejie Li, Jia-Wang Bian, and Victor Adrian Prisacariu. Nope-nerf: Optimising neural radiance field with no pose prior. In CVPR, pages 4160–4169, 2023. 3
[8] Dingding Cai and Janne Heikkila. Gs-pose: Cascaded framework for generalizable segmentation-based 6d object pose estimation. arXiv preprint arXiv:2403.10683, 2024. 1, 6, 7
[9] Ming Cai and Ian Reid. Reconstruct locally, localize globally: A model free method for object pose estimation. In CVPR, 2020. 6
[10] Tuo Cao and Fei Luo. Dgecn: A depth-guided edge convolutional network for end-to-end 6d pose estimation. In CVPR, 2022. 1
[11] Tuo Cao, Wenxiao Zhang, Yanping Fu, Shengjie Zheng, Fei Luo, and Chunxia Xiao. Dgecn++: A depth-guided edge convolutional network for end-to-end 6d pose estimation via attention mechanism. IEEE Transactions on Circuits and Systems for Video Technology, 34(6):4214-4228, 2023. 1
[12] Andrea Caraffa, Davide Boscaini, Amir Hamza, and Fabio Poiesi. Freeze: Training-free zero-shot 6d pose estimation with geometric and vision foundation models. In European Conference on Computer Vision, pages 414-431. Springer, 2024. 2
[13] Pedro Castro and Tae-Kyun Kim. Posematcher: One-shot 6d object pose estimation by deep feature matching. In ICCVW, 2023. 1, 2
[14] Jianqiu Chen and Mingshan Sun. Zeropose: Cad-model-based zero-shot pose estimation. arXiv preprint arXiv:2305.17934, 2023. 1, 2
[15] Kai Chen and Qi Dou. Sgpa: Structure-guided prior adaptation for category-level 6d object pose estimation. In ICCV, 2021. 1
[16] Shin-Fang Chng, Sameera Ramasinghe, Jamie Sherrah, and Simon Lucey. Garf: gaussian activated radiance fields for high fidelity reconstruction and pose estimation. arXiv e-prints, pages arXiv-2204, 2022. 3
[17] Maximilian Denninger, Martin Sundermeyer, Dominik Winkelbauer, Youssef Zidan, Dmitry Olefir, Mohamad Elbadrawy, Ahsan Lodhi, and Harinandan Katam. Blenderproc. arXiv preprint arXiv:1911.01911, 2019. 6
[18] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553-2560. IEEE, 2022. 6
[19] Zhaoxin Fan and Zhenbo Song. Object level depth reconstruction for category level 6d object pose estimation from monocular rgb image. In ECCV, 2022. 1
[20] Martin A. Fischler and Robert C. Bolles. Random sample consensus. COMMUN ACM, 1981. 1
[21] Minghao Gou and Haolin Pan. Unseen object 6d pose estimation: A benchmark and baselines. arXiv preprint arXiv:2206.11808, 2022. 1
[22] Frederik Hagelskjaer and Rasmus Laurvig Haugaard. Keymatchnet: Zero-shot pose estimation in 3d point clouds by generalized keypoint matching. arXiv preprint arXiv:2303.16102, 2023. 1
[23] Huasong Han, Kaixuan Zhou, Xiaoxiao Long, Yusen Wang, and Chunxia Xiao. Ggs: Generalizable gaussian splatting for lane switching in autonomous driving. arXiv preprint arXiv:2409.02382, 2024. 3
[24] Kaiming He and Georgia Gkioxari. Mask r-cnn. In ICCV, 2017. 2
[25] Xingyi He and Jiaming Sun. Onepose++: Keypoint-free one-shot object pose estimation without cad models. In NeurIPS, 2022. 1, 2, 6
[26] Yisheng He and Wei Sun. Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation. In CVPR, 2020. 1
[27] Yisheng He and Yao Wang. Fs6d: Few-shot 6d pose estimation of novel objects. In CVPR, 2022. 1, 2
[28] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM TOG, 37(6):1-15, 2018. 3
[29] Stefan Hinterstoisser and Vincent Lepetit. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In ACCV, 2012. 6, 7
[30] Tomas Hodan and Martin Sundermeyer. Bop challenge 2023 on detection, segmentation and pose estimation of seen and unseen rigid objects. arXiv preprint arXiv:2403.09799, 2024. 6
[31] Lin Huang and Tomas Hodan. Neural correspondence field for object pose estimation. In ECCV, 2022. 3
[32] Jaime Corsetti and Davide Boscaini. Open-vocabulary object 6d pose estimation. In CVPR, 2024. 2
[33] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 2, 3, 5
[34] Alexander Kirillov and Eric Mintun. Segment anything. In ICCV, 2023. 2
[35] Yann Labbe and Justin Carpentier. Cosypose: Consistent multi-view multi-object 6d pose estimation. In ECCV, 2020. 5
[36] Yann Labbe and Lucas Manuelli. Megapose: 6d pose estimation of novel objects via render & compare. In CoRL, 2022. 1, 3, 5, 6
[37] JongMin Lee and Yohann Cabon. Mfos: Model-free & one-shot object pose estimation. In AAAI, 2024. 6
[38] Fu Li and Shishir Reddy Vutukur. Nerf-pose: A first-reconstruct-then-regress approach for weakly-supervised 6d object pose estimation. In ICCV, 2023. 3
[39] Yi Li and Gu Wang. Deepim: Deep iterative matching for 6d pose estimation. In ECCV, 2018. 5, 7
[40] Jiehong Lin and Lihua Liu. Sam-6d: Segment anything model meets zero-shot 6d object pose estimation. In CVPR, 2024. 2
[41] Jian Liu, Wei Sun, Hui Yang, Zhiwen Zeng, Chongpei Liu, Jin Zheng, Xingyu Liu, Hossein Rahmani, Nicu Sebe, and Ajmal Mian. Deep learning-based object pose estimation: A comprehensive survey. arXiv preprint arXiv:2405.07801, 2024. 1
[42] Yuan Liu and Yilin Wen. Gen6d: Generalizable model-free 6-dof object pose estimation from rgb images. In ECCV, 2022. 1, 2, 4, 6, 7
[43] Yuan Liu, Zehong Shen, Zhixuan Lin, Sida Peng, Hujun Bao, and Xiaowei Zhou. Gift: Learning transformation-invariant dense visual descriptors via group cnns. Advances in Neural Information Processing Systems, 32, 2019. 4
[44] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 3, 4
[45] Maxime Oquab and Timothee Darcet. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 2
[46] Evin Pinar Örnek and Yann Labbé. Foundpose: Unseen object pose estimation with foundation features. arXiv preprint arXiv:2311.18809, 2023. 2
[47] Panwang Pan and Zhiwen Fan. Learning to estimate 6dof pose from limited data: A few-shot, generalizable approach using rgb images. In 3DV, 2024. 6, 7
[48] Sida Peng and Yuan Liu. Pvnet: Pixel-wise voting network for 6dof pose estimation. In CVPR, 2019. 1, 6
[49] Giorgia Pitteri and Aurélie Bugeau. 3d object detection and pose estimation of unseen objects in color images with local surface embeddings. In ACCV, 2020. 2
[50] Giorgia Pitteri and Slobodan Ilic. Cornet: Generic 3d corners for 6d pose estimation of new objects without retraining. In ICCVW, 2019. 1, 2
[51] Charles R Qi and Hao Su. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 4
[52] Jiongming Qin, Fei Luo, Tuo Cao, Wenju Xu, and Chunxia Xiao. Hs-surf: A novel high-frequency surface shell radiance field to improve large-scale scene rendering. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6006-6014, 2024. 3
[53] Zheng Qin and Hao Yu. Geometric transformer for fast and robust point cloud registration. In CVPR, 2022. 2
[54] Paul-Edouard Sarlin and Daniel DeTone. Superglue: Learning feature matching with graph neural networks. In CVPR, 2020. 2
[55] Ivan Shugurov and Fu Li. Osop: A multi-stage one shot object pose estimation framework. In CVPR, 2022. 6, 7
[56] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4
[57] Edgar Sucar, Shikun Liu, Joseph Ortiz, and Andrew J Davison. imap: Implicit mapping and positioning in real-time. In ICCV, pages 6229-6238, 2021. 3
[58] Jiaming Sun and Zehong Shen. Loftr: Detector-free local feature matching with transformers. In CVPR, 2021. 2, 6
[59] Jiaming Sun and Zihao Wang. Onepose: One-shot object pose estimation without cad models. In CVPR, 2022. 1, 2, 3, 6
[60] Yuan Sun, Xuan Wang, Yunfan Zhang, Jie Zhang, Caigui Jiang, Yu Guo, and Fei Wang. icomma: Inverting 3d gaussian splatting for camera pose estimation via comparing and matching. arXiv preprint arXiv:2312.09031, 2023. 3
[61] Chen Wang and Danfei Xu. Densefusion: 6d object pose estimation by iterative dense fusion. In CVPR, 2019. 1
[62] Gu Wang and Fabian Manhardt. Gdr-net: Geometry-guided direct regression network for monocular 6d object pose estimation. In CVPR, 2021. 1
[63] He Wang and Srinath Sridhar. Normalized object coordinate space for category-level 6d object pose and size estimation. In CVPR, 2019. 1
[64] Haiping Wang, Yuan Liu, Qingyong Hu, Bing Wang, Jianguo Chen, Zhen Dong, Yulan Guo, Wenping Wang, and Bisheng Yang. Roreg: Pairwise point cloud registration with oriented descriptors and local rotations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8):10376-10393, 2023. 4, 5
[65] Yusen Wang, Zongcheng Li, Yu Jiang, Kaixuan Zhou, Tuo Cao, Yanping Fu, and Chunxia Xiao. Neuralroom: Geometry-constrained neural implicit surfaces for indoor scene reconstruction. ACM Transactions on Graphics (TOG), 41(6):1-15, 2022. 3
[66] Yusen Wang, Kaixuan Zhou, Wenxiao Zhang, and Chunxia Xiao. Megasurf: Scalable large scene neural surface reconstruction. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6414-6423, 2024. 3
[67] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, pages 1398-1402. IEEE, 2003. 6
[68] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6
[69] Bowen Wen, Wei Yang, Jan Kautz, and Stan Birchfield. Foundationpose: Unified 6d pose estimation and tracking of novel objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17868-17879, 2024. 3
[70] Paul Wohlhart and Vincent Lepetit. Learning descriptors for object recognition and 3d pose estimation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 7, 8
[71] Yu Xiang and Tanner Schmidt. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017. 1
[72] Lin Yen-Chen and Pete Florence. inerf: Inverting neural radiance fields for pose estimation. In IROS, 2021. 3
[73] Ruida Zhang and Yan Di. Ssp-pose: Symmetry-aware shape prior deformation for direct category-level object pose estimation. In IROS, 2022. 1
[74] Chen Zhao and Yinlin Hu. Locposenet: Robust location prior for unseen object pose estimation. In 3DV, 2024. 6, 7
[75] Heng Zhao and Shenxing Wei. Learning symmetry-aware geometry correspondences for 6d object pose estimation. In ICCV, 2023. 2
[76] Zihan Zhu, Songyou Peng, Viktor Larsson, Weiwei Xu, Hujun Bao, Zhaopeng Cui, Martin R Oswald, and Marc Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In CVPR, pages 12786-12796, 2022. 3
[77] Lu Zou and Zhangjin Huang. 6d-vit: Category-level 6d object pose estimation via transformer-based instance representation learning. IEEE TIP, 2022. 1
CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ecca9aa5181916346fc654b4b38e9cb3446806f343e839ebcc2a39232ef9685b
size 561908
CVPR/2025/iG-6DoF_ Model-free 6DoF Pose Estimation for Unseen Object via Iterative 3D Gaussian Splatting/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d749c4b608ae44c1851c850d821f7aaa01909c87c495ae488ea357c74bfb6f0a
size 443260
CVPR/2025/iSegMan_ Interactive Segment-and-Manipulate 3D Gaussians/41e89e4d-a900-4b4c-acdd-4a77ec356f1d_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a04a2dbf959e091659f0e65e181f1032c0c076517af495f48998ce336206e4c
size 100470
CVPR/2025/iSegMan_ Interactive Segment-and-Manipulate 3D Gaussians/41e89e4d-a900-4b4c-acdd-4a77ec356f1d_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f040a37ecac06acc793c17ca2a3c8611bb64d437e0b8d534d4ecc87aa70024e5
size 114085