---

<h1 align='center'>SpatialVID: A Large-Scale Video Dataset with Spatial Annotations</h1>

<div align='center'>
<a href='#' target='_blank'>Jiahao Wang</a><sup>1</sup>
<a href='https://github.com/FelixYuan-YF' target='_blank'>Yufeng Yuan</a><sup>1</sup>
<a href='#' target='_blank'>Rujie Zheng</a><sup>1</sup>
<a href='https://linyou.github.io' target='_blank'>Youtian Lin</a><sup>1</sup>
<a href='#' target='_blank'>Yi Zhang</a><sup>1</sup>
<a href='#' target='_blank'>Yajie Bao</a><sup>1</sup>
<a href='https://linzhuo.xyz' target='_blank'>Lin-Zhuo Chen</a><sup>1</sup>
</div>
<div align='center'>
<a href='#' target='_blank'>Yanxi Zhou</a><sup>1</sup>
<a href='#' target='_blank'>Xiaoxiao Long</a><sup>1</sup>
<a href='#' target='_blank'>Hao Zhu</a><sup>1</sup>
<a href='http://zhaoxiangzhang.net/' target='_blank'>Zhaoxiang Zhang</a><sup>2</sup>
<a href='#' target='_blank'>Xun Cao</a><sup>1</sup>
<a href='https://yoyo000.github.io/' target='_blank'>Yao Yao</a><sup>1†</sup>
</div>
<div align='center'>
<sup>1</sup>Nanjing University <sup>2</sup>Institute of Automation, Chinese Academy of Sciences
</div>
<br>
<div align="center">
<a href="https://nju-pcalab.github.io/projects/openvid/"><img src="https://img.shields.io/static/v1?label=SpatialVID&message=Project&color=purple"></a>
<a href="https://arxiv.org/abs/2407.02371"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv&color=red&logo=arxiv"></a>
<a href="https://github.com/opencam-vid/SpatialVid"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>
<a href="https://huggingface.co/SpatialVID"><img src="https://img.shields.io/static/v1?label=Dataset&message=HuggingFace&color=yellow&logo=huggingface"></a>
</div>
<p align="center">
<img src="assets/overview.png" height=400>
</p>

## Abstract

Significant progress has been made in spatial intelligence, spanning both spatial reconstruction and world exploration. However, the scalability and real-world fidelity of current models remain severely constrained by the scarcity of large-scale, high-quality training data. While several datasets provide camera pose information, they are typically limited in scale, diversity, and annotation richness, particularly for dynamic scenes with realistic camera motion. To address this gap, we collect a large corpus of raw video with natural camera movement, providing the foundation for constructing a dataset with unique scale and diversity. In this work, we introduce **SpatialVID**, a large-scale dynamic spatial dataset explicitly designed to provide expressive annotations for this purpose. Through a hierarchical filtering pipeline, we process more than **21,000 hours** of collected raw video into **2.7 million clips**, totaling **7,089 hours** of dynamic content. A subsequent annotation pipeline enriches these clips with detailed spatial and semantic information, including camera poses, depth maps, dynamic masks, structured captions, and labels for camera motion and scene composition.
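To make the pose annotations concrete, here is a minimal sketch of how a per-clip camera trajectory could be recovered from per-frame world-to-camera extrinsics. The record layout and field names (`clip_id`, `camera`, `extrinsics`) are hypothetical placeholders, not SpatialVID's actual schema:

```python
# Hypothetical per-clip annotation record; the real SpatialVID schema and
# field names may differ -- this layout is illustrative only.
clip = {
    "clip_id": "demo_000001",
    "camera": {
        # One world-to-camera extrinsic per frame, row-major 3x4 [R | t].
        "extrinsics": [
            [[1, 0, 0, 0.0], [0, 1, 0, 0.0], [0, 0, 1, 0.0]],
            [[1, 0, 0, -0.5], [0, 1, 0, 0.0], [0, 0, 1, -0.1]],
        ],
    },
}

def camera_center(extrinsic):
    """Recover the camera position C = -R^T t from a [R | t] extrinsic."""
    R = [row[:3] for row in extrinsic]
    t = [row[3] for row in extrinsic]
    # C[i] = -sum_j R[j][i] * t[j]  (multiply t by the transpose of R)
    return [-sum(R[j][i] * t[j] for j in range(3)) for i in range(3)]

# Frame-by-frame camera positions in world coordinates.
trajectory = [camera_center(e) for e in clip["camera"]["extrinsics"]]
print(trajectory)
```

In practice one would use NumPy instead of list arithmetic, but the algebra is the same whatever container format the released annotations use.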

## Demonstration

<table class="center">
  <!-- Row 1 -->
  <tr>
    <td width=25% style="border: none"><img src="assets/f40492a05b8df569687c735e33c295efce06517630489d06f7f95f68527c4674_010804_010895.gif"></td>
    <td width=25% style="border: none"><img src="assets/4d092de5f215e4bc41b7c773a77787289e0053f2f7645c5801fd2d907ebac137_016564_016789.gif"></td>
    <td width=25% style="border: none"><img src="assets/360e7333d4a8302bacfddd6b308606d17342fa23233a72c545fac4fba812ad59_030894_031095.gif"></td>
    <td width=25% style="border: none">A middle-aged man with glasses stands outside a residential building, wearing a gray polo shirt, then suddenly changes to a dark gray polo shirt, with a concerned expression.</td>
  </tr>
</table>

## Dataset Statistics

## License of SpatialVID

SpatialVID is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC-BY-NC-SA-4.0). Users must attribute the original source, use the resource only for non-commercial purposes, and release any modified/derived works under the same license. For the full license text, visit https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.

## Citation

If you find this project useful for your research, please cite our paper.

```bibtex

```