Enhance model card: Add pipeline tag, links, visuals, and usage
#2
by nielsr (HF Staff) - opened

README.md CHANGED
---
license: apache-2.0
pipeline_tag: image-to-3d
---

# Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction

This repository contains **DiffusionGS**, a single-stage 3D diffusion model for object generation and scene reconstruction from a single view. As presented in the paper, DiffusionGS directly outputs 3D Gaussian point clouds at each timestep, which enforces view consistency and lets the model generate robustly given prompt views from any direction, beyond object-centric inputs. It also uses a scene-object mixed training strategy to improve capacity and generality. The method runs over 5× faster (~6 s on an A100 GPU) than state-of-the-art methods.

* **Paper**: [Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction](https://huggingface.co/papers/2411.14384)
* **Project Page**: [https://caiyuanhao1998.github.io/project/DiffusionGS/](https://caiyuanhao1998.github.io/project/DiffusionGS/)
* **Code**: [https://github.com/caiyuanhao1998/Open-DiffusionGS](https://github.com/caiyuanhao1998/Open-DiffusionGS)

<div align="center">
<img src="https://huggingface.co/datasets/CaiYuanhao/DiffusionGS/resolve/main/img/abo.gif" width="24%" alt="abo">
<img src="https://huggingface.co/datasets/CaiYuanhao/DiffusionGS/resolve/main/img/gso.gif" width="24%" alt="gso">
<img src="https://huggingface.co/datasets/CaiYuanhao/DiffusionGS/resolve/main/img/real_img.gif" width="24%" alt="real_img">
<img src="https://huggingface.co/datasets/CaiYuanhao/DiffusionGS/resolve/main/img/wild.gif" width="24%" alt="wild">
</div>

<div align="center">
<img src="https://huggingface.co/datasets/CaiYuanhao/DiffusionGS/resolve/main/img/pipeline.png" width="80%" alt="DiffusionGS Pipeline">
</div>

## Quick Demo

For object-centric image-to-3D generation, a single-line command runs the demo:

```shell
python run.py
```

This script automatically downloads the model checkpoints and config files from Hugging Face.

## Citation

If you find our work useful, please consider citing our paper:

```bibtex
@inproceedings{diffusiongs,
  title={Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction},
  author={Yuanhao Cai and He Zhang and Kai Zhang and Yixun Liang and Mengwei Ren and Fujun Luan and Qing Liu and Soo Ye Kim and Jianming Zhang and Zhifei Zhang and Yuqian Zhou and Yulun Zhang and Xiaokang Yang and Zhe Lin and Alan Yuille},
  booktitle={ICCV},
  year={2025}
}
```