Add dataset card for SparseCam4D
#1
by nielsr HF Staff - opened
README.md
CHANGED
---
license: apache-2.0
task_categories:
- image-to-3d
---

# SparseCam4D: Spatio-Temporally Consistent 4D Reconstruction from Sparse Cameras

This repository contains the demo dataset for **SparseCam4D**, a framework for high-quality 4D reconstruction from sparse and uncalibrated camera inputs.

[**Project page**](https://inspatio.github.io/sparse-cam4d/) | [**Paper**](https://arxiv.org/abs/2603.26481) | [**GitHub**](https://github.com/inspatio/sparse-cam4d)

## Data Layout

The expected data layout for the dataset is as follows:

```
balloon1/
├── depth/
│   ├── cam01/                 # per-frame depth maps for training camera cam01 (*.npy)
│   ├── cam06/                 # per-frame depth maps for training camera cam06 (*.npy)
│   ├── cam10/                 # per-frame depth maps for training camera cam10 (*.npy)
│   ├── cam01.mp4              # depth video visualization
│   ├── cam06.mp4
│   └── cam10.mp4
├── images/                    # all input images, named as <cam>_<time>.png
├── preprocess/
│   ├── time_0000/
│   │   ├── diffusion/         # pseudo-view images generated by ViewCrafter at t=0
│   │   └── sparse/0/          # COLMAP sparse reconstruction at t=0 (cameras.bin, points3D.ply, ...)
│   ├── time_0001/
│   │   └── diffusion/         # pseudo-view images at t=1
│   └── ...                    # time_0002 ~ time_0099, each with diffusion/
├── sfm_transforms_extend.json # camera intrinsics + extrinsics for all views and timestamps
├── vc_roma_sfm_300.ply        # initial point cloud (SfM + RoMa dense matching)
├── transforms_train.json      # camera poses for training split
└── transforms_test.json       # camera poses for test split
```
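
Given the `<cam>_<time>.png` naming in `images/` above, input frames can be grouped per camera and ordered by timestamp. A minimal Python sketch (the `balloon1/` root path and zero-padded `<time>` values are assumptions, not guaranteed by this card):

```python
from collections import defaultdict
from pathlib import Path

# Group input images by camera using the <cam>_<time>.png convention.
# The "balloon1" root and zero-padded timestamps are assumptions here.
frames_by_cam = defaultdict(list)
for img in sorted(Path("balloon1/images").glob("*.png")):
    cam, time = img.stem.rsplit("_", 1)
    frames_by_cam[cam].append((int(time), img))

for cam, frames in sorted(frames_by_cam.items()):
    print(f"{cam}: {len(frames)} frames, t={frames[0][0]}..{frames[-1][0]}")
```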

**Depth maps** are estimated by [Video Depth Anything](https://github.com/DepthAnything/Video-Depth-Anything) on the training-camera videos.
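
The per-frame depth maps are plain `.npy` arrays and can be inspected directly with NumPy. A minimal sketch, assuming the frame files inside `depth/cam01/` sort lexicographically into frame order (the exact file names are not specified here):

```python
from pathlib import Path
import numpy as np

# Load all per-frame depth maps for one training camera.
# Only the directory layout and .npy format come from the card above;
# the per-file naming inside depth/cam01/ is an assumption.
depth_dir = Path("balloon1/depth/cam01")
depths = [np.load(f) for f in sorted(depth_dir.glob("*.npy"))]
print(f"{len(depths)} frames, shape {depths[0].shape}, dtype {depths[0].dtype}")
print(f"depth range of frame 0: {depths[0].min():.3f}..{depths[0].max():.3f}")
```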

**Pseudo-view images** under `preprocess/time_*/diffusion/` are synthesized by [ViewCrafter](https://github.com/Drexubery/ViewCrafter) from the training cameras to cover additional viewpoints at each timestamp, with the sparse camera poses estimated by [VGGT](https://github.com/facebookresearch/vggt).
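
The `transforms_*.json` files can be read with the standard `json` module. The sketch below assumes a NeRF-style schema (a `frames` list with `file_path` and a 4x4 `transform_matrix`); that schema is inferred from the file names, not stated by this card, so check the files for the actual keys:

```python
import json
import numpy as np

# Read per-split camera poses. The NeRF-style keys ("frames",
# "file_path", "transform_matrix") are assumptions inferred from the
# file names; inspect the JSON for the actual schema.
with open("balloon1/transforms_train.json") as f:
    meta = json.load(f)

for frame in meta.get("frames", []):
    c2w = np.asarray(frame["transform_matrix"])  # assumed 4x4 camera-to-world
    print(frame.get("file_path"), c2w.shape)
```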

## Sample Usage

### Training

To train the model on this dataset, edit the `source_path` and `model_path` fields in the config file, then run:

```shell
python train.py --config configs/nvidia/balloon1.yaml
```

### Rendering and Evaluation

After training and performing pose alignment, you can render and evaluate using:

```shell
python render.py --config configs/nvidia/balloon1.yaml --skip_train --iteration 30000
```
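
As a rough illustration of the evaluation step, a per-image PSNR between a rendered frame and its ground-truth test image can be computed as below. This is a generic sketch, not the repository's own metric code, and it assumes images scaled to `[0, 1]`:

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray) -> float:
    """PSNR in dB, assuming both images are scaled to [0, 1]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(-10.0 * np.log10(mse))
```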

## Citation

```bibtex
@article{pan2026sparsecam4d,
  title={SparseCam4D: Spatio-Temporally Consistent 4D Reconstruction from Sparse Cameras},
  author={Pan, Weihong and Zhang, Xiaoyu and Zhang, Zhuang and Ye, Zhichao and Wang, Nan and Liu, Haomin and Zhang, Guofeng},
  journal={arXiv preprint arXiv:2603.26481},
  year={2026}
}
```