---
license: cc-by-4.0
task_categories:
- depth-estimation
---
The official implementation is available on GitHub.
Zero-Shot Depth from Defocus
Yiming Zuo* · Hongyu Wen* · Venkat Subramanian* · Patrick Chen · Karhan Kayan · Mario Bijelic · Felix Heide · Jia Deng
(*Equal Contribution)
Princeton Vision & Learning Lab (PVL)
Paper · Project
Overview
We captured 100 focus stacks in 100 unique scenes, covering a diverse range of indoor and outdoor locations such as classrooms, hallways, robotics labs, offices, kitchens, and gardens.
For each focus stack, we capture images at 9 focus distances, ranging from 0.82 m to 8.10 m. We capture at 5 larger apertures (F1.4/2.0/2.8/4.0/5.6), plus a small aperture (F16) for all-in-focus images, yielding 6 × 9 = 54 images per scene. This rich combination of focus distances and apertures allows us to study how sensitive a model's performance is to each factor.
We provide a dense ground-truth depth map for each scene at a resolution of 1824 × 1216, captured with a high-accuracy LiDAR.
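Given the file naming shown in the data structure below, a focus stack can be grouped by aperture with a small helper. This is a sketch under an assumed filename convention (`img_run_<n>_motor_<code>_aperture_F<number>.jpg`, where the motor code stands in for the focus distance); check the official repository for the exact parsing rules.

```python
import re
from collections import defaultdict
from pathlib import Path

# Hypothetical filename pattern, inferred from the example filenames in the
# data layout; the motor code is kept as an opaque string.
PATTERN = re.compile(
    r"img_run_\d+_motor_(?P<motor>[0-9A-Fa-f]+)_aperture_F(?P<fnum>[\d.]+)\.jpg"
)

def group_by_aperture(stack_dir):
    """Map f-number -> sorted list of image paths in one focus_stack/ folder."""
    groups = defaultdict(list)
    for path in sorted(Path(stack_dir).glob("*.jpg")):
        match = PATTERN.match(path.name)
        if match:
            groups[float(match.group("fnum"))].append(path)
    return dict(groups)
```

With the 6 apertures × 9 focus distances described above, each scene's `focus_stack/` folder should yield 6 groups of 9 images each.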
Data Structure
ZEDD contains 100 scenes divided into validation and test sets. For each scene, the data is organized as follows:
ZEDD/
├── test/
│ ├── test_0001/
│ │ ├── focus_stack/
│ │ │ ├── img_run_1_motor_6D3E_aperture_F1.4.jpg
│ │ │ ├── img_run_1_motor_6D3E_aperture_F2.0.jpg
│ │ │ └── ...
│ │ └── gt/
│ │ └── K.txt
│ └── ...
└── val/
├── val_0001/
│ ├── focus_stack/
│ │ ├── img_run_1_motor_6D3E_aperture_F1.4.jpg
│ │ ├── img_run_1_motor_6D3E_aperture_F2.0.jpg
│ │ └── ...
│ └── gt/
│ ├── depth_vis.jpg
│ ├── depth.npy
│ ├── K.txt
│ └── overlay.jpg
└── ...
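Following the layout above, the ground truth for a validation scene could be loaded as below. This is a minimal sketch that assumes `depth.npy` stores a per-pixel depth map in meters and `K.txt` the 3×3 camera intrinsics; both conventions are assumptions, so verify them against the official implementation.

```python
import numpy as np

def load_gt(scene_dir):
    """Load the ground-truth depth map and camera intrinsics for one scene.

    Assumes depth.npy is an (H, W) array (depth in meters, per the LiDAR
    capture described above) and K.txt holds 9 values forming a 3x3 matrix.
    """
    depth = np.load(f"{scene_dir}/gt/depth.npy")
    K = np.loadtxt(f"{scene_dir}/gt/K.txt").reshape(3, 3)
    return depth, K
```

Note that, in the layout above, only `K.txt` is listed under `gt/` for test scenes, so dense depth may be available for the validation split only.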
Citation
@article{ZeroShotDepthFromDefocus,
author = {Zuo, Yiming and Wen, Hongyu and Subramanian, Venkat and Chen, Patrick and Kayan, Karhan and Bijelic, Mario and Heide, Felix and Deng, Jia},
title = {Zero-Shot Depth from Defocus},
journal = {arXiv preprint arXiv:2603.26658},
year = {2026},
url = {https://arxiv.org/abs/2603.26658}
}