# SynthCam 1K — Synthetic Camera Motion Dataset
A synthetic dataset of 3D scene renderings with full camera pose and intrinsics metadata, designed for training and evaluating models in camera pose estimation, NeRF, and 3D Gaussian Splatting pipelines.
## What's inside
- 1 000 clips × 30 frames = 30 000 JPEG images (640×360, 16:9)
- Per-frame camera extrinsics — position (x, y, z) and rotation (Euler angles)
- Per-frame camera intrinsics — focal length (fx, fy), principal point (cx, cy), FOV, aspect ratio
- 3 background variants per clip — starfield, studio white, chroma green
- MP4 video per clip encoded at 30fps (libx264, CRF 20)
- Deterministic generation — every clip is fully reproducible from its seed
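The per-frame extrinsics can be assembled into a 4×4 camera-to-world matrix. A minimal sketch in pure Python — note the Euler rotation order here (Rx · Ry · Rz, three.js's default `'XYZ'` order) is an assumption, since the card does not state the convention, and `camera_to_world` is an illustrative helper, not part of the dataset tooling:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def camera_to_world(position, rotation):
    """Build a 4x4 camera-to-world matrix from one record's extrinsics.

    Position/rotation values arrive as strings (see the JSONL schema),
    so they are cast to float here. ASSUMPTION: Euler angles compose as
    Rx @ Ry @ Rz (three.js default 'XYZ' order) -- unverified.
    """
    R = matmul(rot_x(float(rotation["x"])),
               matmul(rot_y(float(rotation["y"])), rot_z(float(rotation["z"]))))
    t = [float(position[axis]) for axis in ("x", "y", "z")]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]
```

With zero rotation this reduces to a pure translation by the camera position, which is a quick sanity check before trusting the angle convention.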
## Dataset structure
```
SynthCam-1K/
├── dataset.jsonl        ← one record per frame (streamable, HF-ready)
├── dataset_info.json    ← schema, stats, split info
├── README.md            ← this file
├── clip_0000/
│   ├── metadata.json    ← per-frame camera data for this clip
│   ├── video.mp4        ← 30fps clip
│   └── frames/
│       ├── frame_000.jpg
│       ├── frame_001.jpg
│       └── ...
├── clip_0001/
│   └── ...
└── ...
```
## JSONL record schema

Each line in `dataset.jsonl` is a self-contained JSON object:
```json
{
  "clip_id": "clip_0042",
  "frame_index": 14,
  "image_path": "clip_0042/frames/frame_014.jpg",
  "video_path": "clip_0042/video.mp4",
  "seed": 42,
  "speed": 0.73,
  "camera_position": { "x": "8.1023", "y": "5.4211", "z": "-3.2901" },
  "camera_rotation": { "x": "-0.4812", "y": "1.4327", "z": "0.0000" },
  "intrinsics": {
    "fov_deg": 60,
    "aspect": 1.7778,
    "near": 0.1,
    "far": 1000,
    "fx": 554.2563,
    "fy": 554.2563,
    "cx": 320.0,
    "cy": 180.0,
    "width": 640,
    "height": 360
  }
}
```
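Because every record is self-contained, the file can be processed one line at a time without loading anything else. One schema quirk worth noting: `camera_position` and `camera_rotation` values are JSON strings, while `intrinsics` values are plain numbers. A minimal parsing sketch (the line below is an abridged copy of the example record):

```python
import json

# One line (abridged) in the shape of a dataset.jsonl record.
line = ('{"clip_id": "clip_0042", "frame_index": 14, '
        '"camera_position": {"x": "8.1023", "y": "5.4211", "z": "-3.2901"}, '
        '"intrinsics": {"fx": 554.2563, "width": 640}}')

record = json.loads(line)

# Position components are string-typed and need an explicit float cast;
# intrinsics are already numeric.
pos = {axis: float(v) for axis, v in record["camera_position"].items()}
fx = record["intrinsics"]["fx"]
```

In practice you would iterate over `open("dataset.jsonl")` line by line and apply the same cast to `camera_rotation`.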
## Camera model

Standard pinhole camera model. The intrinsics matrix K is:

```
K = | fx  0  cx |
    |  0  fy cy |
    |  0  0  1  |
```

with fx = fy = (width / 2) / tan(fov / 2), where fov is the horizontal field of view in radians — square pixels, no skew.
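The focal-length formula can be checked directly against the example record's intrinsics (treating `fov_deg` as the horizontal FOV, which is what makes the numbers consistent):

```python
import math

width, height = 640, 360
fov_deg = 60.0

# fx = fy = (width / 2) / tan(fov / 2), with fov converted to radians.
fx = (width / 2) / math.tan(math.radians(fov_deg) / 2)

# Principal point at the image center.
cx, cy = width / 2, height / 2

print(round(fx, 4), cx, cy)  # 554.2563 320.0 180.0
```

The result matches the `fx`, `fy`, `cx`, and `cy` values in the schema example above.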
## How it was generated
Scenes are rendered in real-time using React Three Fiber (Three.js) inside a headless Chromium instance (Puppeteer). The camera orbits the scene on a circular path with added handheld noise. Each clip uses a unique integer seed that controls:
- Star field pattern
- Cube positions, scales, and material roughness
- Camera orbit speed
- Background type
Generation is fully deterministic — any clip can be reproduced exactly by re-running the recorder with its seed.
Scene contents:
- 15 PBR metallic cubes arranged pseudo-randomly
- 20 000-particle volumetric cloud (custom GLSL shader)
- Directional light with shadow mapping
- HDRI city environment for reflections
## Intended use cases
- Camera pose estimation model training
- NeRF / 3D Gaussian Splatting pretraining and evaluation
- Synthetic-to-real transfer learning research
- Depth estimation (ground truth depth available on request)
- Visual odometry benchmarking
## Splits
| Split | Clips | Frames |
|---|---|---|
| train | 900 | 27 000 |
| validation | 100 | 3 000 |
| total | 1 000 | 30 000 |
Splits are defined by clip index: clips 0000–0899 → train, 0900–0999 → validation.
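Since splits are defined purely by clip index, the split for any record can be recovered from its `clip_id` alone. A small helper (the function name is illustrative, not part of the dataset tooling):

```python
def split_for_clip(clip_id: str) -> str:
    """Map a clip_id like 'clip_0042' to its split.

    Clips 0000-0899 belong to train, 0900-0999 to validation,
    per the split definition above.
    """
    index = int(clip_id.split("_")[1])
    return "train" if index < 900 else "validation"

print(split_for_clip("clip_0042"))  # train
print(split_for_clip("clip_0950"))  # validation
```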
## Loading with 🤗 Datasets
```python
from datasets import load_dataset

ds = load_dataset("Jmart7/SynthCam-1K", streaming=True)
for record in ds["train"]:
    print(record["clip_id"], record["intrinsics"]["fx"])
```
## License
This dataset is released under CC BY-NC 4.0. Free for research and non-commercial use with attribution. For commercial licensing, contact the author.
## Citation

```bibtex
@dataset{synthcam1k_2026,
  title     = {SynthCam 1K: Synthetic Camera Motion Dataset},
  author    = {Jmart7},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Jmart7/SynthCam-1K}
}
```