Update README.md

README.md

@@ -4,6 +4,8 @@

# Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

+**[Paper](), [Project Page](https://research.nvidia.com/labs/toronto-ai/lyra/)**
+
[Sherwin Bahmani](https://sherwinbahmani.github.io/),
[Tianchang Shen](https://www.cs.toronto.edu/~shenti11/),
[Jiawei Ren](https://jiawei-ren.github.io/),
@@ -18,6 +20,7 @@
[Jun Gao](https://www.cs.toronto.edu/~jungao/),
[Xuanchi Ren](https://xuanchiren.com/) <br>

+
### Description:
Lyra is a feed-forward 3D Gaussian Splatting (3DGS) reconstruction model for 3D and 4D scenes. We obtain it by distilling a pre-trained video diffusion model into a feed-forward 3DGS generator: Lyra circumvents the need for 3D datasets or model fine-tuning by leveraging the inherent 3D consistency of video outputs to align 2D renderings with a 3DGS decoder. Using the generated synthetic multi-view data, we train a decoder that operates directly in the video model's latent space and produces 3D Gaussians. This framework enables real-time rendering and establishes a new state of the art for 3D / 4D scene generation, supporting both single-view and video inputs.
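As a rough sketch of the feed-forward design described above, the toy decoder below maps video-diffusion latent frames to per-pixel 3D Gaussian parameters. The module name, channel counts, network depth, and parameterization are illustrative assumptions for this sketch and are not taken from the Lyra codebase.

```python
# Minimal sketch (not the released code): a feed-forward decoder from
# video-model latents to per-pixel 3D Gaussian parameters. All names,
# shapes, and channel counts are assumptions for illustration.
import torch
import torch.nn as nn


class LatentToGaussianDecoder(nn.Module):
    def __init__(self, latent_channels: int = 16, hidden: int = 128):
        super().__init__()
        # 14 output channels per pixel: 3 (center) + 3 (scale) +
        # 4 (rotation quaternion) + 1 (opacity) + 3 (RGB color).
        self.net = nn.Sequential(
            nn.Conv2d(latent_channels, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, 14, 1),
        )

    def forward(self, latents: torch.Tensor) -> dict:
        # latents: (B, C, H, W) latent frames from the video model.
        out = self.net(latents)
        xyz, scale, rot, opacity, rgb = torch.split(out, [3, 3, 4, 1, 3], dim=1)
        return {
            "xyz": xyz,                                        # Gaussian centers
            "scale": torch.exp(scale),                         # positive scales
            "rotation": nn.functional.normalize(rot, dim=1),   # unit quaternions
            "opacity": torch.sigmoid(opacity),
            "rgb": torch.sigmoid(rgb),
        }


if __name__ == "__main__":
    decoder = LatentToGaussianDecoder()
    fake_latents = torch.randn(1, 16, 60, 104)  # placeholder latent resolution
    gaussians = decoder(fake_latents)
    print({k: tuple(v.shape) for k, v in gaussians.items()})
```

Because the decoder is a single forward pass over latents rather than a per-scene optimization, the predicted Gaussians can be handed directly to a 3DGS rasterizer for real-time rendering; training such a decoder against 2D renderings of the video model's outputs is the self-distillation idea summarized in the description.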