Add dataset card for SparseCam4D

#1 by nielsr HF Staff - opened
Files changed (1): README.md (+71 -3)

README.md CHANGED

---
license: apache-2.0
task_categories:
- image-to-3d
---

# SparseCam4D: Spatio-Temporally Consistent 4D Reconstruction from Sparse Cameras

This repository contains the demo dataset for **SparseCam4D**, a framework for high-quality 4D reconstruction from sparse and uncalibrated camera inputs.

[**Project page**](https://inspatio.github.io/sparse-cam4d/) | [**Paper**](https://arxiv.org/abs/2603.26481) | [**GitHub**](https://github.com/inspatio/sparse-cam4d)

## Data Layout

The expected data layout for each scene of the dataset is as follows:

```
balloon1/
├── depth/
│   ├── cam01/                    # per-frame depth maps for training camera cam01 (*.npy)
│   ├── cam06/                    # per-frame depth maps for training camera cam06 (*.npy)
│   ├── cam10/                    # per-frame depth maps for training camera cam10 (*.npy)
│   ├── cam01.mp4                 # depth video visualization
│   ├── cam06.mp4
│   └── cam10.mp4
├── images/                       # all input images, named as <cam>_<time>.png
├── preprocess/
│   ├── time_0000/
│   │   ├── diffusion/            # pseudo-view images generated by ViewCrafter at t=0
│   │   └── sparse/0/             # COLMAP sparse reconstruction at t=0 (cameras.bin, points3D.ply, ...)
│   ├── time_0001/
│   │   └── diffusion/            # pseudo-view images at t=1
│   └── ...                       # time_0002 ~ time_0099, each with diffusion/
├── sfm_transforms_extend.json    # camera intrinsics + extrinsics for all views and timestamps
├── vc_roma_sfm_300.ply           # initial point cloud (SfM + RoMa dense matching)
├── transforms_train.json         # camera poses for training split
└── transforms_test.json          # camera poses for test split
```
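
To sanity-check a downloaded scene against this layout, here is a minimal Python sketch. The paths follow the tree above; nothing is assumed about the internal schema of the transforms files beyond their being valid JSON:

```python
import json
from pathlib import Path

import numpy as np

root = Path("balloon1")  # adjust to wherever the scene was downloaded

# Per-frame depth maps for one training camera are plain .npy arrays.
depth_files = sorted((root / "depth" / "cam01").glob("*.npy"))
depth = np.load(depth_files[0])
print(f"cam01: {len(depth_files)} depth maps, first one has shape {depth.shape}")

# The transforms files are plain JSON; peek at their top-level structure.
for name in ("transforms_train.json", "transforms_test.json", "sfm_transforms_extend.json"):
    with open(root / name) as f:
        meta = json.load(f)
    summary = list(meta)[:6] if isinstance(meta, dict) else f"list of {len(meta)} entries"
    print(f"{name}: {summary}")
```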

**Depth maps** are estimated by [Video Depth Anything](https://github.com/DepthAnything/Video-Depth-Anything) on the training-camera videos.
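
For a quick visual inspection of one of these maps, a small sketch (matplotlib and the `turbo` colormap are our choices here, not part of the dataset):

```python
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np

# Load the first depth map of cam01; files inside the folder are per-frame .npy arrays.
depth_file = sorted(Path("balloon1/depth/cam01").glob("*.npy"))[0]
depth = np.load(depth_file)

plt.imshow(depth, cmap="turbo")
plt.colorbar(label="estimated depth")
plt.title(depth_file.name)
plt.savefig("depth_preview.png", dpi=150)
```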

**Pseudo-view images** under `preprocess/time_*/diffusion/` are synthesized by [ViewCrafter](https://github.com/Drexubery/ViewCrafter) from the training cameras to cover additional viewpoints at each timestamp, using sparse camera poses estimated by [VGGT](https://github.com/facebookresearch/vggt).
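
To check how many pseudo-views were generated per timestamp, a short sketch (the image file format inside `diffusion/` is not assumed, so it simply counts files):

```python
from pathlib import Path

preprocess = Path("balloon1/preprocess")
for tdir in sorted(preprocess.glob("time_*"))[:5]:  # first five timestamps
    n_views = sum(1 for p in (tdir / "diffusion").iterdir() if p.is_file())
    print(f"{tdir.name}: {n_views} pseudo-view images")
```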

## Sample Usage

### Training

To train the model on this dataset, edit the `source_path` and `model_path` fields in the config file (a sketch of this step follows the command below), then run:

```shell
python train.py --config configs/nvidia/balloon1.yaml
```
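
The config schema is defined by the training repo; as a sketch of the edit described above, assuming `source_path` and `model_path` are top-level YAML keys (our reading of the instructions, not a documented schema):

```python
import yaml  # pip install pyyaml

cfg_file = "configs/nvidia/balloon1.yaml"
with open(cfg_file) as f:
    cfg = yaml.safe_load(f)

# Point the trainer at the downloaded scene and an output directory.
cfg["source_path"] = "/data/sparsecam4d/balloon1"  # hypothetical local path
cfg["model_path"] = "output/balloon1"              # hypothetical output dir

with open(cfg_file, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```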

### Rendering and Evaluation

After training and pose alignment, render and evaluate with:

```shell
python render.py --config configs/nvidia/balloon1.yaml --skip_train --iteration 30000
```
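
In 3DGS-style codebases (which this training script appears to follow, given the 30000-iteration checkpoint), `--skip_train` typically skips rendering the training views so that only the test split is rendered, and `--iteration` selects which saved checkpoint to load; consult the GitHub repo for the authoritative flag list.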

## Citation

```bibtex
@article{pan2026sparsecam4d,
  title={SparseCam4D: Spatio-Temporally Consistent 4D Reconstruction from Sparse Cameras},
  author={Pan, Weihong and Zhang, Xiaoyu and Zhang, Zhuang and Ye, Zhichao and Wang, Nan and Liu, Haomin and Zhang, Guofeng},
  journal={arXiv preprint arXiv:2603.26481},
  year={2026}
}
```