# StaDy4D Dataset
StaDy4D (Static vs Dynamic 4D) is a CARLA 0.9.16 dataset that pairs static environments with their dynamic counterparts. Every camera sweep captures (1) the empty map and (2) the same trajectory after populating the world with traffic. Each frame is accompanied by metrically accurate RGB-D data, camera poses, and ready-to-use MP4s, making the dataset useful for scene understanding, 4D reconstruction, and generative modeling research.
## At a Glance

- Maps: 12 CARLA towns (Town01–Town07, Town10HD, Town11–Town13, Town15)
- Sequences: 20 videos per map → 240 trajectories, each recorded twice (static & dynamic)
- Frames: 15 s clips at 10 FPS (200 frames) per scene → 96,000 frame pairs total
- Camera behaviors: 6 realistic trajectories (dashcam, drone, rooftop orbit, crossroad, CCTV, pedestrian) automatically cycled through the dataset
- Modalities: RGB, depth, depth visualization, per-frame intrinsics/extrinsics, video-level intrinsics, metadata, and MP4s for RGB/depth in both scenes
- Dynamic actors: 80 autopilot vehicles + 50 AI pedestrians per dynamic capture; static captures keep only the background geometry
## Getting the Data

### 1. Quick Sample (ships with this repo)

- Path: `StaDy4D/sample`
- Content: `Town04/video_{00,01}` and `Town05/video_{00,01}` (≈50 frames per static/dynamic scene)
- Purpose: sanity checks, unit tests, and quick visualizations without downloading the full release.
### 2. Full Release on Hugging Face

The entire StaDy4D dataset is hosted as a dataset repository at huggingface.co/datasets/henry000/StaDy4D. Each town is stored as a folder with per-video subdirectories, identical to the tree documented below.
Option A – Git LFS clone (best for downloading everything):

```bash
sudo apt install git-lfs   # or: brew install git-lfs
git lfs install
git clone https://huggingface.co/datasets/henry000/StaDy4D StaDy4D_full
cd StaDy4D_full
# (Optional) Download only a subset to save time
git lfs pull --include "Town05/**"
```
Option B – Hugging Face CLI (resume-friendly, partial downloads):

```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli login   # optional if the dataset is public
huggingface-cli download henry000/StaDy4D \
  --repo-type dataset \
  --local-dir StaDy4D_full \
  --resume-download \
  --local-dir-use-symlinks False
```
Option C – Python API (scripted access to individual files):

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="henry000/StaDy4D",
    repo_type="dataset",
    revision="main",
    allow_patterns=["Town05/video_00/**"],  # narrow down as needed
)
```
Once downloaded, set `STADY4D_ROOT=/path/to/StaDy4D_full` (or update the paths in your code) and the folder structure below will line up exactly.
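A small helper along these lines (a sketch; the fallback to the bundled sample is just an illustrative default) keeps scripts portable between the sample and the full release:

```python
import os
from pathlib import Path

# Resolve the dataset root from STADY4D_ROOT, falling back to the
# bundled sample subset shipped with this repo (illustrative default).
DATASET_ROOT = Path(os.environ.get("STADY4D_ROOT", "StaDy4D/sample"))

town = DATASET_ROOT / "Town05"
videos = sorted(p for p in town.iterdir() if p.is_dir())
print(f"Found {len(videos)} sequences under {town}")
```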
## Folder Layout

```text
StaDy4D/
├── Town05/                      # Full release (one folder per CARLA map)
│   └── video_00/                # Sequence (paired static/dynamic)
│       ├── metadata.json        # Sequence-level metadata (fps, trajectory, weather, …)
│       ├── intrinsic.json       # Camera intrinsics shared by both scenes
│       ├── static/              # Scene with only map geometry
│       │   ├── rgb/rgb_0000.png
│       │   ├── depth/depth_0000.npy
│       │   ├── depth_vis/depth_vis_0000.png
│       │   ├── extrinsics/extrinsic_0000.npy
│       │   └── intrinsics/intrinsic_0000.npy
│       ├── dynamic/             # Same trajectory with vehicles + walkers
│       │   └── ...
│       ├── static_rgb.mp4       # ffmpeg-compressed RGB video
│       ├── static_depth.mp4     # Depth visualization video (50 m normalization)
│       ├── dynamic_rgb.mp4
│       └── dynamic_depth.mp4
├── sample/                      # Lightweight subset shipped with this repo
└── README.md                    # You are here
```
The `sample/` directory mirrors the exact structure of the full dataset while keeping only two short (≈50-frame) sequences each from Town04 and Town05 for smoke testing.
## Modalities & Naming

| Data | Path pattern | Format & units | Notes |
|---|---|---|---|
| RGB frames | `TownXX/video_YY/{static\|dynamic}/rgb/rgb_XXXX.png` | 640×360 PNG (uint8, BGR order) | Per-frame RGB ready for OpenCV. |
| Depth maps | `.../depth/depth_XXXX.npy` | float32 NumPy array, meters (0–1000 m) | Infinity is clamped to 1000 m. |
| Depth visualization | `.../depth_vis/depth_vis_XXXX.png` | 640×360 PNG (uint8) | Depth mapped to [0, 255] using a 50 m window for qualitative viewing. |
| Camera extrinsics | `.../extrinsics/extrinsic_XXXX.npy` | 4×4 float64 matrix | Camera-to-world (c2w) transform in CV convention (X→right, Y→down, Z→forward). |
| Camera intrinsics (per frame) | `.../intrinsics/intrinsic_XXXX.npy` | 3×3 float64 matrix | Derived from the frame's FoV; identical across a sequence but stored for convenience. |
| Sequence intrinsics | `TownXX/video_YY/intrinsic.json` | JSON | Contains `fx`, `fy`, `cx`, `cy`, `width`, `height`, `fov_deg`. |
| Metadata | `TownXX/video_YY/metadata.json` | JSON | Captures map, trajectory type, fps, frame count, weather, and actor counts. |
| Videos | `TownXX/video_YY/static_rgb.mp4`, etc. | MP4 (H.264, yuv420p) | Created with ffmpeg for fast previewing without decoding PNGs. |
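As a quick check of the formats above, the sketch below loads a raw depth map and rebuilds a qualitative preview like `depth_vis`; the exact mapping is not documented beyond the 50 m window, so the linear clamp here is an assumption:

```python
import cv2
import numpy as np

# float32 depth in meters (infinity clamped to 1000 m).
depth = np.load("StaDy4D/sample/Town05/video_00/static/depth/depth_0000.npy")

# Assumed visualization: clamp to a 50 m window, scale linearly to [0, 255].
vis = (np.clip(depth / 50.0, 0.0, 1.0) * 255.0).astype(np.uint8)
cv2.imwrite("depth_preview.png", vis)
```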
## Metadata Fields

`metadata.json` contains:

```json
{
  "map_name": "Town05",
  "video_idx": 0,
  "num_frames": 200,
  "fps": 10,
  "trajectory_type": "car_forward",
  "resolution": {"width": 640, "height": 360},
  "fov_deg": 70.0,
  "n_vehicles": 80,
  "n_walkers": 50,
  "weather": "ClearNoon"
}
```
The same values apply to both static and dynamic halves of the sequence; only the actors present in the world differ.
Note: the bundled sample subset shortens each clip to 50 frames, so `num_frames` there will read `50`.
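Because the frame count differs between the sample and the full release, loops are best driven by `num_frames` instead of a hard-coded constant; a minimal sketch:

```python
import json
from pathlib import Path

base = Path("StaDy4D/sample/Town05/video_00")
meta = json.loads((base / "metadata.json").read_text())

# Iterate over however many frames this sequence actually contains.
for frame_id in range(meta["num_frames"]):
    rgb_path = base / "static" / "rgb" / f"rgb_{frame_id:04d}.png"
    assert rgb_path.exists(), rgb_path
```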
## Camera Trajectories

Each video follows one of six behaviors (specified in `metadata.json` → `trajectory_type`):

- `car_forward` – Dashcam perspective at ~2.5 m height, gentle steering, 0.8 m per frame.
- `drone_forward` – Low-altitude drone shot (10–20 m) gliding at ~0.6 m per frame with mild drift.
- `orbit_building` – 30–40 m rooftop position that pans 120° without translating.
- `orbit_crossroad` – Elevated node (3–5 m) panning roughly 100° across an intersection.
- `cctv` – Fully static camera on a high rooftop observing traffic.
- `pedestrian` – Human-eye view (1.5–1.8 m) strolling at 1.5 m/s along sidewalks.
Setting `"trajectory_types": ["mixed"]` in the generator ensures the dataset cycles through these six options so that every town contains a balanced blend of vantage points.
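To see how the behaviors are distributed in a downloaded copy, here is a minimal sketch (assuming the folder layout above; `DATASET_ROOT` is a hypothetical path to a local copy) that buckets sequences by `trajectory_type`:

```python
import json
from collections import defaultdict
from pathlib import Path

DATASET_ROOT = Path("StaDy4D_full")  # hypothetical local copy

# Group every sequence directory by the behavior recorded in its metadata.
by_trajectory = defaultdict(list)
for meta_path in DATASET_ROOT.glob("Town*/video_*/metadata.json"):
    meta = json.loads(meta_path.read_text())
    by_trajectory[meta["trajectory_type"]].append(meta_path.parent)

for name, seqs in sorted(by_trajectory.items()):
    print(f"{name}: {len(seqs)} sequences")
```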
## CARLA Maps Included
| Map | Description |
|---|---|
| Town01 | Compact river town with bends and bridges. |
| Town02 | Residential blocks with storefronts and plazas. |
| Town03 | Dense downtown with skyscrapers and multilane roads. |
| Town04 | Small town stitched to a highway loop. |
| Town05 | Multi-level highway network with flyovers. |
| Town06 | Low-density suburban layout surrounded by forests. |
| Town07 | Rural lanes and tight turns through countryside. |
| Town10HD | Downtown HD map with wide boulevards. |
| Town11 | Industrial area filled with factories and depots. |
| Town12 | Rural residential region with farms. |
| Town13 | Modern roundabouts and mixed-use districts. |
| Town15 | Glass high-rises and futuristic downtown blocks. |
## Working With the Data

```python
import cv2
import json
import numpy as np
from pathlib import Path

base = Path("StaDy4D/sample/Town05/video_00")

# Load sequence-level metadata and intrinsics
metadata = json.loads((base / "metadata.json").read_text())
intrinsics = json.loads((base / "intrinsic.json").read_text())

# Select static frame 12
frame_id = 12
static_dir = base / "static"
rgb = cv2.imread(str(static_dir / "rgb" / f"rgb_{frame_id:04d}.png"))
depth = np.load(static_dir / "depth" / f"depth_{frame_id:04d}.npy")
pose_c2w = np.load(static_dir / "extrinsics" / f"extrinsic_{frame_id:04d}.npy")
K = np.load(static_dir / "intrinsics" / f"intrinsic_{frame_id:04d}.npy")

# Example: unproject depth to camera coordinates
yy, xx = np.indices(depth.shape)
Z = depth
X = (xx - intrinsics["cx"]) * Z / intrinsics["fx"]
Y = (yy - intrinsics["cy"]) * Z / intrinsics["fy"]
```
Depth is already in meters. The extrinsic matrices convert camera-frame points to CARLA world coordinates via `p_world = T_c2w @ p_camera_h`.
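Continuing the snippet above, a short sketch (homogeneous coordinates; relies on the CV-convention c2w matrices described in the modalities table) that lifts the unprojected points into world coordinates:

```python
# Stack camera-space coordinates (H, W, 3) and flatten to (N, 3).
pts_cam = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

# Append a homogeneous 1 to each point: (N, 4).
pts_cam_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)

# p_world = T_c2w @ p_camera_h, applied to all points at once.
pts_world = (pose_c2w @ pts_cam_h.T).T[:, :3]
print(pts_world.shape)  # (230400, 3) for a 640x360 frame
```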
## Reproducing / Extending the Dataset

The dataset ships with the generator used to collect it. `config_large_scale.json` (at the root of this repository) captures the exact settings for StaDy4D:

```json
{
  "maps": ["Town01", "...", "Town15"],
  "video_generation": {
    "videos_per_map": 20,
    "video_duration_sec": 15,
    "fps": 10,
    "trajectory_types": ["mixed"]
  },
  "actors": {"n_vehicles": 80, "n_walkers": 50},
  "camera": {"width": 640, "height": 360, "fov": 70},
  "weather": "ClearNoon"
}
```
Run `python data_generate_carla.py --config config_large_scale.json` with CARLA 0.9.16 to regenerate every sequence (or tweak the JSON to form new subsets). The same script also writes `progress.json`, so long runs can safely resume if interrupted.
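To carve out a smaller subset, one option (a sketch; `config_subset.json` is just an example filename) is to rewrite the shipped config before rerunning the generator:

```python
import json
from pathlib import Path

# Narrow the shipped config to a single town with fewer videos.
cfg = json.loads(Path("config_large_scale.json").read_text())
cfg["maps"] = ["Town05"]
cfg["video_generation"]["videos_per_map"] = 2

Path("config_subset.json").write_text(json.dumps(cfg, indent=2))
# Then run: python data_generate_carla.py --config config_subset.json
```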
## License & Citation
StaDy4D inherits the CARLA simulator license for rendered content. Please attribute both CARLA and this dataset when using it in academic or commercial work. Citation details will be added as soon as the accompanying paper/preprint is released.
For issues or feature requests, open an issue in this repository or email the maintainers listed in the root README.md.