---
license: cc-by-nc-4.0
task_categories:
- depth-estimation
tags:
- stereo-matching
- disparity
- stereo4d
- foundationstereo
pretty_name: FFS Stereo4D
size_categories:
- 100K<n<1M
---
# FFS Stereo4D
Disparity maps for stereo matching, generated from the Stereo4D dataset using FoundationStereo.
## Dataset Structure

```
data/train/
    metadata.csv
    0000000.zip   (first 50,000 images)
    0000001.zip   (next 50,000 images)
    ...
    0000025.zip
```

Each zip contains disparity PNG files named `{vid_id}_frame_{frame_idx:06d}.png`.
- Disparity images: 3-channel uint8 784×784 PNG files encoding per-pixel disparity. Decode with
  `disp = (R * 255*255 + G * 255 + B) / 1000.0`. See also: https://github.com/NVlabs/FoundationStereo/blob/master/scripts/vis_dataset.py
- `metadata.csv`: Links each disparity image back to its source YouTube video, with a `zip_file` column indicating which zip contains the image.
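The decoding formula above can be sketched as a small helper. The round-trip example below uses a synthetic pixel rather than real data (in practice the RGB array would come from a PNG read out of one of the zips, e.g. via `zipfile` and PIL):

```python
import numpy as np

def decode_disparity(rgb: np.ndarray) -> np.ndarray:
    """Decode the 3-channel uint8 disparity encoding into float pixels."""
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    return (r * 255 * 255 + g * 255 + b) / 1000.0

# Round-trip check on a synthetic pixel encoding disparity 42.5 px.
value = 42500                                   # 42.5 px * 1000
rgb = np.array([[[value // (255 * 255),
                  (value // 255) % 255,
                  value % 255]]], dtype=np.uint8)
print(decode_disparity(rgb)[0, 0])              # 42.5
```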
## Metadata Columns

| Column | Description |
|---|---|
| `file_name` | Disparity image filename (inside the zip) |
| `zip_file` | Which zip file contains this image |
| `vid_id` | Clip identifier (matches the `.npz` calibration file) |
| `frame_idx` | Frame index in the rectified stereo output |
| `youtube_video_id` | YouTube video ID of the source 360° video |
| `timestamp_us` | Timestamp in microseconds in the original video |
| `timestamp_sec` | Timestamp in seconds |
| `video_frame_index` | Estimated frame number in the original video |
| `fps` | FPS of the source video |
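A typical access pattern is to group metadata rows by `zip_file` so each archive is opened only once. A minimal sketch with pandas — the column names come from the table above, but the CSV content here is synthetic, not actual dataset rows:

```python
import io
import pandas as pd

# Synthetic stand-in for data/train/metadata.csv.
csv = io.StringIO(
    "file_name,zip_file,vid_id,frame_idx\n"
    "abc_frame_000000.png,0000000.zip,abc,0\n"
    "abc_frame_000001.png,0000000.zip,abc,1\n"
)
meta = pd.read_csv(csv)

# Open each zip once and process all of its images together.
for zip_name, rows in meta.groupby("zip_file"):
    print(zip_name, list(rows["file_name"]))
```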
## Retrieving Source RGB Frames

This dataset contains disparity maps only. Because the source videos are copyrighted, users must download them on their own behalf. The corresponding left/right RGB stereo pairs can be recovered by:

- Following the stereo4d toolkit to download the YouTube video using `youtube_video_id`.
- Seeking to `timestamp_sec` (or `video_frame_index`) to locate the source frame.
- Applying equirectangular rectification using the Stereo4D calibration `.npz` files to obtain the left and right perspective images.
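The relation between the timestamp columns and `video_frame_index` can be sketched as below; the exact rounding convention used when the dataset was generated is an assumption:

```python
def frame_index(timestamp_us: int, fps: float) -> int:
    """Estimate the source-video frame number from a microsecond timestamp."""
    return round(timestamp_us / 1_000_000 * fps)

print(frame_index(2_500_000, 30.0))  # frame at t = 2.5 s of a 30 fps video -> 75
```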
## Generation Pipeline

- Source: YouTube 360° videos from the Stereo4D dataset.
- Rectification: Equirectangular frames are rectified and cropped to 1024×1024 perspective stereo pairs.
- Disparity estimation: FoundationStereo computes dense disparity at 784×784 resolution (resized by `scale=0.765625` from the 1024×1024 input).
## Camera Parameters

The rectified stereo pairs are generated at 1024×1024 with the following pinhole camera model:

| Parameter | Value (1024×1024 rectified) | Value (784×784 disparity) | Formula |
|---|---|---|---|
| HFOV | 60° | 60° | `output_hfov` in `batch_rectify.py` |
| Baseline | 0.063 m | 0.063 m | Assumed interpupillary distance for VR180 cameras |
| fx, fy | 886.8 px | 679.0 px | `size * 0.5 / tan(0.5 * HFOV * pi/180)` |
| cx, cy | 512 px | 392 px | Image center |
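The table values follow directly from the formula column. A minimal sketch building the pinhole intrinsics matrix at either resolution (the helper name is illustrative):

```python
import numpy as np

def intrinsics(size: int, hfov_deg: float = 60.0) -> np.ndarray:
    """Pinhole K for a square image with the given horizontal FOV."""
    f = size * 0.5 / np.tan(0.5 * np.radians(hfov_deg))
    c = size / 2.0
    return np.array([[f, 0.0, c],
                     [0.0, f, c],
                     [0.0, 0.0, 1.0]])

K_rect = intrinsics(1024)  # fx = fy ~= 886.8, cx = cy = 512
K_disp = intrinsics(784)   # fx = fy ~= 679.0, cx = cy = 392
```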
Depth is derived as `depth = fx * baseline / disparity`.

Since disparity is computed at 784×784 resolution (a scale factor of 784/1024 = 0.765625 relative to the 1024×1024 input), use the 784×784 camera parameters when converting disparity to depth:
```python
import numpy as np

hfov = 60         # degrees
baseline = 0.063  # meters
imw = 784
fx = imw * 0.5 / np.tan(0.5 * np.radians(hfov))  # ~= 679.0 px
depth = fx * baseline / disparity                # disparity: decoded (784, 784) array
```
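With the same intrinsics, the depth map can also be back-projected into a camera-space point cloud. A minimal sketch using placeholder disparity data (a constant 50 px map) in place of a real decoded image:

```python
import numpy as np

hfov, baseline, imw = 60.0, 0.063, 784
fx = imw * 0.5 / np.tan(0.5 * np.radians(hfov))  # ~= 679.0 px
cx = cy = imw / 2.0

disparity = np.full((imw, imw), 50.0)            # placeholder data
depth = fx * baseline / disparity                # meters

# Back-project each pixel (u, v) with its depth into (X, Y, Z).
u, v = np.meshgrid(np.arange(imw), np.arange(imw))
points = np.stack([(u - cx) * depth / fx,
                   (v - cy) * depth / fx,        # fy == fx
                   depth], axis=-1)              # shape (784, 784, 3)
```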
## Citation

If you use this dataset, please consider citing:
```bibtex
@article{wen2026fastfoundationstereo,
  title={Fast-FoundationStereo: Real-Time Zero-Shot Stereo Matching},
  author={Bowen Wen and Shaurya Dewan and Stan Birchfield},
  journal={CVPR},
  year={2026}
}

@article{wen2025foundationstereo,
  title={FoundationStereo: Zero-Shot Stereo Matching},
  author={Wen, Bowen and Trepte, Matthew and Aribido, Joseph and Kautz, Jan and Birchfield, Stan and Wan, Yao},
  journal={CVPR},
  year={2025}
}

@inproceedings{jin2025stereo4d,
  title={{Stereo4D: Learning How Things Move in 3D from Internet Stereo Videos}},
  author={Jin, Linyi and Tucker, Richard and Li, Zhengqi and Fouhey, David and Snavely, Noah and Holynski, Aleksander},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025},
}
```