Request access to D-RE10K (Research-Only)
D-RE10K contains processed real-estate walkthrough video clips derived from third-party sources. Access is granted for non-commercial research only. We do not grant rights to any underlying third-party content. You are responsible for ensuring you have the necessary rights to use the media. By requesting access, you agree to use this dataset for non-commercial research purposes only.
D-RE10K: Dynamic Real-Estate 10K Dataset
Overview
This dataset contains the DRE10K training split (15,467 clips, 147,422 frames) and the DRE10K mask test split (76 clips, 1,541 frames), released on Hugging Face for research on self-supervised large view synthesis in dynamic environments. The data is collected from real-estate walkthrough videos and curated specifically for training and evaluating novel view synthesis models in scenes with dynamic objects.
Our dataset builds on the Real-Estate 10K collection and extends it with per-frame binary masks, masked videos, COLMAP reconstructions, and DPVO camera trajectories for the test split. Each clip is accompanied by JSON metadata containing camera intrinsics and world-to-camera poses, making it a versatile resource for tasks such as novel view synthesis, camera pose estimation, and dynamic scene understanding.
For more details, please refer to our paper WildRayZer: Self-supervised Large View Synthesis in Dynamic Environments.
| Split | Clips | Extracted Frames | Metadata (JSON) | Binary Masks | Masked Videos | COLMAP | DPVO |
|---|---|---|---|---|---|---|---|
| Train | 15,467 | 147,422 | 15,467 | — | — | — | — |
| Test | 76 | 1,541 | 76 | 1,540 | 76 | 76 | 76 |
Key Features
- Size: 15,467 training clips with 147,422 extracted frames; 76 test clips with 1,541 frames.
- Representation: Extracted PNG frames from real-estate walkthrough videos, with per-clip JSON metadata (camera intrinsics, world-to-camera poses, frame paths).
- Train split includes:
  - Video clips (`.mp4`)
  - Extracted frames (`.png`)
  - Per-clip JSON metadata
- Test split additionally includes:
  - Per-frame binary masks (`.png`) for dynamic objects
  - Masked videos with dynamic objects removed (`.mp4`)
  - COLMAP reconstructions (sparse models in binary & text, masks, database)
  - DPVO estimated camera trajectories (`.txt`)
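The per-frame binary masks can be used to suppress dynamic-object pixels in the corresponding frames. The sketch below is a minimal example; it assumes nonzero mask values mark dynamic objects, which you should verify against the released masked videos before relying on it.

```python
import numpy as np

def remove_dynamic(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out dynamic-object pixels in an RGB frame.

    Assumes the binary mask marks dynamic objects with nonzero values;
    check this convention against the released masked videos.
    """
    keep = (mask == 0)  # True where the scene is static
    return frame * keep[..., None].astype(frame.dtype)

# Tiny synthetic example: a 2x2 white frame with one "dynamic" pixel.
frame = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[0, 255], [0, 0]], dtype=np.uint8)
out = remove_dynamic(frame, mask)
```

In practice you would load a frame from `images/<clip_id>/` and its mask from `binary_masks/<clip_id>/` (e.g. with Pillow) and pass both arrays to the function.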
Dataset Format
The dataset is provided in a format ready for view-synthesis and 3D-reconstruction research:
- Videos: Stored as `.mp4` files under `videos/`.
- Frames: Stored as `.png` files under `images/<clip_id>/`.
- Metadata: Stored as `.json` files under `metadata/`. Each JSON file contains camera intrinsics (`fxfycxcy`), 4×4 world-to-camera matrices (`w2c`), and frame paths.
- Binary Masks (test only): Stored as `.png` files under `binary_masks/<clip_id>/`.
- COLMAP (test only): Full sparse reconstructions under `colmap/<clip_id>/` (includes `sparse/`, `masks/`, `database.db`).
- DPVO (test only): Camera trajectory files under `dpvo/<clip_id>.txt`.
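A per-clip metadata file can be read as below. The key names `fxfycxcy` and `w2c` come from the format description above; the exact nesting of the JSON may differ, so inspect one file under `metadata/` before relying on this sketch.

```python
import json
import numpy as np

def load_metadata(path: str):
    """Read a per-clip metadata JSON; return intrinsics and camera poses.

    Key names fxfycxcy (intrinsics) and w2c (4x4 world-to-camera matrices)
    follow the dataset description; verify against a real file.
    """
    with open(path) as f:
        meta = json.load(f)
    fxfycxcy = np.asarray(meta["fxfycxcy"], dtype=np.float64)
    w2c = np.asarray(meta["w2c"], dtype=np.float64).reshape(-1, 4, 4)
    return fxfycxcy, w2c

# Round-trip on a synthetic one-frame clip, standing in for a real file.
sample = {"fxfycxcy": [500.0, 500.0, 320.0, 240.0],
          "w2c": [np.eye(4).tolist()]}
with open("sample_meta.json", "w") as f:
    json.dump(sample, f)
fxfycxcy, w2c = load_metadata("sample_meta.json")
```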
The dataset is distributed as multi-part zip archives. After downloading, unzip them as follows:
```bash
# Unzip training data (8 parts)
mkdir -p train
for f in train_zip/train_*.zip; do
  unzip -o "$f" -d .
done

# Unzip test data (3 parts)
mkdir -p test
for f in test_zip/test_*.zip; do
  unzip -o "$f" -d .
done
```
After unzipping, you should see the train/ and test/ directories with the structure described above.
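To confirm the extraction matches the counts in the table above, you can tally clip directories and frames. The helper below assumes the `images/<clip_id>/*.png` layout from the Dataset Format section; adjust the paths if your unzipped layout differs.

```python
from pathlib import Path

def count_clips(split_dir: str) -> tuple[int, int]:
    """Count clip directories and extracted PNG frames under images/."""
    images = Path(split_dir) / "images"
    clips = [d for d in images.iterdir() if d.is_dir()]
    frames = sum(1 for d in clips for _ in d.glob("*.png"))
    return len(clips), frames

# Demo on a throwaway directory standing in for train/ or test/.
demo = Path("demo_split")
for cid in ("clip_a", "clip_b"):
    (demo / "images" / cid).mkdir(parents=True, exist_ok=True)
    (demo / "images" / cid / "000.png").touch()
n_clips, n_frames = count_clips("demo_split")
```

Run with `count_clips("train")` and `count_clips("test")` after unzipping; the results should be 15,467 / 147,422 and 76 / 1,541 respectively.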
License
This dataset is released for non-commercial research use only. The video clips and frames are derived from third-party sources. We do not hold the copyright to the underlying audio-visual content. Users must agree to the terms outlined in the LICENSE file, which include:
- Use for non-commercial research only.
- No redistribution of the dataset.
- Acknowledgment of third-party rights.
Takedown Policy
The video clips in this dataset are derived from third-party sources. If any clips need to be taken down (e.g., due to privacy concerns or copyright requests), we will promptly delete them from this dataset. Please contact us at xuweic@virginia.edu for such requests.
Citation
If you find this dataset useful in your research, please cite our work:
```bibtex
@article{chen2026wildrayzerselfsupervisedlargeview,
  title={WildRayZer: Self-supervised Large View Synthesis in Dynamic Environments},
  author={Xuweiyi Chen and Wentao Zhou and Zezhou Cheng},
  year={2026},
  eprint={2601.10716},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.10716},
}
```