---
license: cc-by-4.0
---
# Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

**[Paper](), [Project Page](https://research.nvidia.com/labs/toronto-ai/lyra/)**

[Sherwin Bahmani](https://sherwinbahmani.github.io/),
[Tianchang Shen](https://www.cs.toronto.edu/~shenti11/),
[Jiawei Ren](https://jiawei-ren.github.io/),
[Jiahui Huang](https://huangjh-pub.github.io/),
[Yifeng Jiang](https://cs.stanford.edu/~yifengj/),
[Haithem Turki](https://haithemturki.com/),
[Andrea Tagliasacchi](https://theialab.ca/),
[David B. Lindell](https://davidlindell.com/),
[Zan Gojcic](https://zgojcic.github.io/),
[Sanja Fidler](https://www.cs.utoronto.ca/~fidler/),
[Huan Ling](https://www.cs.toronto.edu/~linghuan/),
[Jun Gao](https://www.cs.toronto.edu/~jungao/),
[Xuanchi Ren](https://xuanchiren.com/) <br>
## Dataset Description:

The PhysicalAI-SpatialIntelligence-Lyra-SDG Dataset is a multi-view 3D and 4D dataset generated with [GEN3C](https://github.com/nv-tlabs/GEN3C).
The 3D reconstruction setup uses 59,031 images, while the 4D setup uses 7,378 videos. All data are generated from diverse text prompts spanning a variety of scenarios, including indoor and outdoor environments, humans, animals, and both realistic and imaginative content. We synthesize 6 camera trajectories for each image (3D) or video (4D), yielding 354,186 videos for the 3D setup and 44,268 videos for the 4D setup.
For each video, the dataset provides RGB frames, camera poses, and depth maps.

This dataset is ready for commercial use.
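The per-setup video counts follow directly from the 6 camera trajectories synthesized per example; a quick sanity check of the stated numbers:

```python
# Each 3D image and each 4D video is rendered along 6 camera trajectories.
views_per_example = 6

videos_3d = 59_031 * views_per_example  # 3D setup: 59,031 source images
videos_4d = 7_378 * views_per_example   # 4D setup: 7,378 source videos

print(videos_3d, videos_4d)  # 354186 44268
```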
## Dataset Owner(s):
NVIDIA Corporation

## Dataset Creation Date:
2025/09/23
## License/Terms of Use:
This dataset is released under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
## Intended Usage:
Researchers and academics working on spatial intelligence problems can use this dataset to train AI models for multi-view video generation or reconstruction.
## Dataset Characterization:
**Data Collection Method**<br>
Synthetic

**Labeling Method**<br>
Synthetic
## Dataset Format:
RGB video in .mp4, camera poses in .npz, and depth in .zip format.
## Dataset Quantification:
The 3D reconstruction setup has 59,031 multi-view examples, while the 4D setup has 7,378 multi-view examples. Each multi-view example has 6 views.
For each view, we provide an RGB video along with its camera poses and depth maps.

| Field       | Format |
|-------------|--------|
| Video       | .mp4   |
| Camera pose | .npz   |
| Depth       | .zip   |

Storage: 25 TB
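As a rough sketch of how one view's annotations might be read, the snippet below loads camera data from a .npz file and per-frame depth maps from a .zip archive. The file names and array keys (`w2c`, `intrinsics`, `frame_0000.npy`) are illustrative assumptions, not the dataset's documented schema; inspect a downloaded sample (or the official loaders) for the actual layout.

```python
# Hypothetical loaders for one view's annotations; key names and archive
# layout are assumptions for illustration, not the dataset's schema.
import io
import os
import tempfile
import zipfile

import numpy as np

def load_camera_poses(npz_path):
    """Return all arrays stored in a camera .npz file, keyed by name."""
    with np.load(npz_path) as data:
        return {k: data[k] for k in data.files}

def load_depth_zip(zip_path):
    """Read per-frame depth maps stored as .npy entries inside a zip."""
    depths = {}
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            depths[name] = np.load(io.BytesIO(zf.read(name)))
    return depths

# Demo on synthetic stand-ins (no real dataset files required).
tmp = tempfile.mkdtemp()

# Assumed camera file: per-frame world-to-camera extrinsics + intrinsics.
npz_path = os.path.join(tmp, "cameras.npz")
np.savez(npz_path,
         w2c=np.tile(np.eye(4), (121, 1, 1)),  # one 4x4 pose per frame
         intrinsics=np.eye(3))

# Assumed depth archive: one .npy depth map per frame.
zip_path = os.path.join(tmp, "depth.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    buf = io.BytesIO()
    np.save(buf, np.zeros((480, 640), dtype=np.float32))
    zf.writestr("frame_0000.npy", buf.getvalue())

cams = load_camera_poses(npz_path)
depths = load_depth_zip(zip_path)
print(cams["w2c"].shape, depths["frame_0000.npy"].shape)
```

The demo writes its own stand-in files so the loading pattern can be tried without downloading the 25 TB dataset.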
## Reference(s):

Please refer to https://github.com/nv-tlabs/lyra for instructions on how to use this dataset.
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this dataset in accordance with our terms of service, developers should work with their internal teams to ensure it meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Citation
```
@inproceedings{bahmani2025lyra,
  title={Lyra: Generative 3D Scene Reconstruction via Self-Distillation with Video Diffusion Models},
  author={Bahmani, Sherwin and Shen, Tianchang and Ren, Jiawei and Huang, Jiahui and Jiang, Yifeng and Turki, Haithem and Tagliasacchi, Andrea and Lindell, David B. and Gojcic, Zan and Fidler, Sanja and Ling, Huan and Gao, Jun and Ren, Xuanchi},
  booktitle={arXiv 2025},
  year={2025}
}
```

```
@inproceedings{ren2025gen3c,
  title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
  author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and Lu, Yifan and Nimier-David, Merlin and M{\"u}ller, Thomas and Keller, Alexander and Fidler, Sanja and Gao, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```