# Hypersim Episode Pairs
WebDataset shards of frame pairs from Apple Hypersim (CC BY-SA 3.0) for geometry-aware training. Compatible with `create_geometry_loader()` from `kvray-distill`.
## Dataset Summary
| Split | Shards | Pairs | Scenes |
|---|---|---|---|
| Train | 72 | 88,354 | 208 |
| Val | 10 | 7,255 | 18 |
| Total | 82 | 95,609 | 226 |
- Image size: 1024x768
- Frame gaps: 1, 2, 4, 8 (wider gaps subsampled to 50% of gap=1 count)
- Split provenance: `metadata_images_split_scene_v1.csv` from apple/ml-hypersim
## Shard Schema
Each sample in a shard contains:
| File | Description |
|---|---|
| `key.rgb0.jpg` | Frame 0 RGB (JPEG quality 85, Reinhard tone-mapped from HDR) |
| `key.rgb1.jpg` | Frame 1 RGB |
| `key.depth0.npz` | Frame 0 depth (float16, compressed) |
| `key.depth1.npz` | Frame 1 depth (float16, compressed) |
| `key.seg0.npz` | Frame 0 segmentation (uint16, NYU-40), when available |
| `key.seg1.npz` | Frame 1 segmentation (uint16, NYU-40), when available |
| `key.meta.json` | Poses, intrinsics, gap, metadata |
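Since WebDataset shards are plain tar archives, a sample can be read with nothing but the standard library plus NumPy. A minimal sketch: it builds a tiny synthetic shard in memory that mirrors the schema above, then reads it back by grouping member files on the key before the first dot. The `"depth"` array key inside the `.npz` files is an assumption for this sketch; check a real shard for the actual key names.

```python
import io
import json
import tarfile

import numpy as np

def add_member(tar, name, payload):
    """Append one file to an open tar archive from in-memory bytes."""
    info = tarfile.TarInfo(name)
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Build a tiny synthetic shard in memory that mirrors the schema above.
# The "depth" array key inside the .npz is an assumption for this sketch.
shard = io.BytesIO()
with tarfile.open(fileobj=shard, mode="w") as tar:
    npz = io.BytesIO()
    np.savez_compressed(npz, depth=np.ones((768, 1024), dtype=np.float16))
    add_member(tar, "sample0.depth0.npz", npz.getvalue())
    meta_bytes = json.dumps({"gap": 1, "depth_type": "ray_distance"}).encode()
    add_member(tar, "sample0.meta.json", meta_bytes)

# Read it back, grouping member files by the key before the first dot --
# the same grouping rule the WebDataset format uses.
shard.seek(0)
samples = {}
with tarfile.open(fileobj=shard, mode="r") as tar:
    for member in tar.getmembers():
        key, field = member.name.split(".", 1)
        samples.setdefault(key, {})[field] = tar.extractfile(member).read()

meta = json.loads(samples["sample0"]["meta.json"])
depth0 = np.load(io.BytesIO(samples["sample0"]["depth0.npz"]))["depth"]
```

In practice the `webdataset` library (or `create_geometry_loader()`) does this grouping for you; the sketch only shows what the on-disk layout implies.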
## meta.json Fields

```json
{
"dataset": "hypersim",
"depth_type": "ray_distance",
"pose_convention": "T_w_c",
"scene": "ai_001_001",
"camera": "cam_00",
"frame_ids": ["ai_001_001_cam_00_0010", "ai_001_001_cam_00_0011"],
"gap": 1,
"poses": [[...16 floats (4x4 row-major)...], [...]],
"intrinsics": [fx, fy, cx, cy],
"image_size": [768, 1024],
"has_flow": false,
"has_seg": true,
"meters_per_asset_unit": 0.0254
}
```
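The `poses` field is enough to recover the relative camera motion between the two frames. A sketch with illustrative pose values (not real Hypersim data), assuming the `T_w_c` convention stated below:

```python
import numpy as np

# Illustrative T_w_c poses (not real Hypersim values): frame 1 is the
# identity pose translated 0.5 m along the world x-axis.
T_w_c0 = np.eye(4)
T_w_c1 = np.eye(4)
T_w_c1[0, 3] = 0.5

# In a real sample these come from meta["poses"], each a flat row-major list:
#   T_w_c0 = np.array(meta["poses"][0]).reshape(4, 4)

# Relative transform taking frame-1 camera coordinates into frame 0.
T_c0_c1 = np.linalg.inv(T_w_c0) @ T_w_c1
baseline = float(np.linalg.norm(T_c0_c1[:3, 3]))  # camera baseline in meters
```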
## Depth Convention
IMPORTANT: Depth values are ray distance (Euclidean distance from the optical center), NOT projective z-depth.
To convert to projective z-depth for standard pinhole geometry:
```python
import numpy as np

# (u, v): pixel coordinates; fx, fy, cx, cy: pinhole intrinsics
dx = (u - cx) / fx
dy = (v - cy) / fy
cos_angle = 1.0 / np.sqrt(1.0 + dx**2 + dy**2)
z_depth = ray_distance * cos_angle
```
`create_geometry_loader()` in `kvray-distill` handles this automatically when `meta.json` has `depth_type == "ray_distance"`.
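The per-pixel conversion vectorizes directly over a full depth map. A sketch using a NumPy meshgrid, with the nominal intrinsics from the Intrinsics section; at the principal point the ray coincides with the optical axis, so z-depth equals ray distance there and is strictly smaller everywhere else.

```python
import numpy as np

def ray_to_z_depth(ray_dist, fx, fy, cx, cy):
    """Convert an (H, W) ray-distance map to projective z-depth."""
    H, W = ray_dist.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    return ray_dist / np.sqrt(1.0 + dx**2 + dy**2)

# Constant 2 m ray distance: z == 2 m only at the principal point (512, 384).
ray = np.full((768, 1024), 2.0, dtype=np.float32)
z = ray_to_z_depth(ray, 886.8, 886.8, 512.0, 384.0)
```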
## Pose Convention
Poses are `T_w_c` (camera-to-world), 4x4 homogeneous matrices in row-major order. Positions are in meters (converted via the per-scene `meters_per_asset_unit`).
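Under this convention a camera-frame point is mapped into the world frame by a single matrix product, `p_w = T_w_c @ p_c`. A small illustration with a made-up pose (not from the dataset):

```python
import numpy as np

# A made-up T_w_c: camera at (2.0, 0.0, 1.5) m, optical axis along world +x.
T_w_c = np.array([
    [0.0, 0.0, 1.0, 2.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 1.5],
    [0.0, 0.0, 0.0, 1.0],
])

p_c = np.array([0.0, 0.0, 3.0, 1.0])  # homogeneous point 3 m ahead of the camera
p_w = T_w_c @ p_c                     # camera-to-world: lands at (5.0, 0.0, 1.5)
```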
## Intrinsics
Derived from `M_cam_from_uv` in `metadata_camera_parameters.csv`:

- `fx = W / (2 * M_cam_from_uv_00)` ≈ 886.8 px
- `fy = H / (2 * M_cam_from_uv_11)` ≈ 886.8 px
- `cx = W / 2 = 512`, `cy = H / 2 = 384`

All Hypersim scenes have `settings_camera_fov = pi/3` (60° horizontal FOV).
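A quick sanity check: with a 60° horizontal FOV and W = 1024, the standard pinhole relation `fx = (W / 2) / tan(HFOV / 2)` reproduces the ≈ 886.8 px value independently of the CSV.

```python
import math

W, H = 1024, 768
hfov = math.pi / 3  # settings_camera_fov: 60 deg horizontal FOV

# Half the image width subtends half the horizontal FOV.
fx = (W / 2) / math.tan(hfov / 2)  # ≈ 886.8 px
cx, cy = W / 2, H / 2
```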
## Excluded Scenes
207 tilt-shift scenes were excluded because their physical camera model introduces a lens shift that a simple pinhole intrinsics model cannot represent (off-diagonal `M_cam_from_uv` entries > 0.01, corresponding to a principal-point error of > 5 px). See `audit.json` for the full exclusion list.
4 pairs with severely broken rotation matrices (orthogonality error > 100) were removed post-build. See `audit.json` for details.
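The orthogonality error can be checked with a one-line Frobenius-norm test. A sketch; the exact metric (`‖RᵀR − I‖_F` on the 3x3 rotation block) is an assumption about how the audit defines it.

```python
import numpy as np

def orthogonality_error(R):
    """Frobenius-norm deviation of R^T R from the 3x3 identity."""
    return float(np.linalg.norm(R.T @ R - np.eye(3)))

R_good = np.eye(3)        # perfect rotation: error 0
R_bad = 12.0 * np.eye(3)  # grossly mis-scaled "rotation" block: error >> 100
```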
## Quality Audit
Full audit results in `audit.json`. Summary:
- Tilt-shift contamination: CLEAN
- Depth convention: PASS
- Pose convention: PASS
- Intrinsics range: PASS
- Depth plausibility: PASS
- 78 minor pose orthogonality warnings (floating-point noise, < 0.1% of pairs)
## License & Attribution
This dataset is derived from Hypersim by Apple Inc., licensed under CC BY-SA 3.0.
Mike Roberts and Nathan Paczan. "Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding." International Conference on Computer Vision (ICCV) 2021.