---
license: apache-2.0
language:
- en
pipeline_tag: image-to-3d
modalities:
- image
- point clouds
- mesh
arxiv: 2411.14384
---

# [ICCV 2025] DiffusionGS: Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction

## Model Description

These three models are trained for object-level and scene-level image-to-3D generation at spatial resolutions of 256x256 and 512x512. For object-level generation, mesh export is also supported. Here are some generated examples:
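
As background on the output representation: a single 3D Gaussian splat is commonly parameterized by a mean, a per-axis scale, a rotation quaternion, an opacity, and a color, with covariance built as Sigma = R S S^T R^T. The sketch below is illustrative only; the class and field names are our own, and the actual tensor layout used by DiffusionGS may differ (see the GitHub repo).

```python
import numpy as np

class Gaussian3D:
    """Illustrative 3D Gaussian Splatting primitive (not DiffusionGS's actual layout)."""

    def __init__(self, mean, scale, rotation, opacity, color):
        self.mean = np.asarray(mean, dtype=np.float64)          # (3,) center position
        self.scale = np.asarray(scale, dtype=np.float64)        # (3,) per-axis std-dev
        self.rotation = np.asarray(rotation, dtype=np.float64)  # (4,) quaternion (w, x, y, z)
        self.opacity = float(opacity)                           # scalar in [0, 1]
        self.color = np.asarray(color, dtype=np.float64)        # (3,) RGB

    def covariance(self):
        """Build the 3x3 covariance Sigma = R S S^T R^T from quaternion + scale."""
        w, x, y, z = self.rotation / np.linalg.norm(self.rotation)
        R = np.array([
            [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T

# With an identity rotation, the covariance is simply diag(scale**2).
g = Gaussian3D(mean=[0, 0, 0], scale=[1, 2, 3], rotation=[1, 0, 0, 0],
               opacity=1.0, color=[1, 1, 1])
print(g.covariance())  # diagonal matrix with entries 1, 4, 9
```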

· (a) Object-level Generation

<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/abo.gif" width="24%" alt="abo">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/gso.gif" width="24%" alt="gso">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/real_img.gif" width="24%" alt="real_img">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/wild.gif" width="24%" alt="wild">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/sd_2.gif" width="24%" alt="sd_2">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/sd_1.gif" width="24%" alt="sd_1">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/flux_1.gif" width="24%" alt="flux_1">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/green_man.gif" width="24%" alt="green_man">
</p>

· (b) Scene-level Generation

<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/plaza.gif" width="50%" alt="plaza">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/town.gif" width="48%" alt="town">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/cliff.gif" width="49.5%" alt="cliff">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/art_gallery.gif" width="48.5%" alt="art_gallery">
</p>

· (c) Comparison with Hunyuan3D-v2.5

The first row shows the prompt images, the second row shows Hunyuan3D-v2.5, and the third row shows our DiffusionGS.

Our method generates better results while enjoying a 7.5x faster inference speed.

<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/1.png" width="32%" alt="1">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/2.jpg" width="32%" alt="2">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/3.png" width="32%" alt="3">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/hunyuan_1.gif" width="32%" alt="hunyuan_1">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/hunyuan_2.gif" width="32%" alt="hunyuan_2">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/hunyuan_3.gif" width="32%" alt="hunyuan_3">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/ours_1.gif" width="32%" alt="ours_1">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/ours_2.gif" width="32%" alt="ours_2">
<img src="https://raw.githubusercontent.com/caiyuanhao1998/Open-DiffusionGS/img/ours_3.gif" width="32%" alt="ours_3">
</p>

## GitHub Code Link

Please refer to our GitHub repo for more detailed instructions on using our code and models:

https://github.com/caiyuanhao1998/Open-DiffusionGS/


## Project Page Link

For more video and interactive generation results, please refer to our project page:

https://caiyuanhao1998.github.io/project/DiffusionGS/


## arXiv Paper Link

For more technical details, please refer to our ICCV 2025 paper:

https://arxiv.org/abs/2411.14384

90
+
91
+ If you find our code, data, and models useful, please consider citing our paper:
92
+
93
+ ```sh
94
+ @inproceedings{diffusiongs,
95
+ title={Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction},
96
+ author={Yuanhao Cai and He Zhang and Kai Zhang and Yixun Liang and Mengwei Ren and Fujun Luan and Qing Liu and Soo Ye Kim and Jianming Zhang and Zhifei Zhang and Yuqian Zhou and Yulun Zhang and Xiaokang Yang and Zhe Lin and Alan Yuille},
97
+ booktitle={ICCV},
98
+ year={2025}
99
+ }
100
+ ```