---
license: bsd-3-clause
---
<p align="center">
The official implementation is available on
<a href="https://github.com/princeton-vl/FOSSA"><strong>GitHub</strong></a>.
</p>
<h1 align="center">Zero-Shot Depth from Defocus</h1>
<p align="center">
<a href="https://zuoym15.github.io/"><strong>Yiming Zuo*</strong></a>
·
<a href="https://hermera.github.io/"><strong>Hongyu Wen*</strong></a>
·
<a href="https://www.linkedin.com/in/venkat-subramanian5/"><strong>Venkat Subramanian*</strong></a>
·
<a href="https://patrickchen.me/"><strong>Patrick Chen</strong></a>
·
<a href="https://kkayan.com/"><strong>Karhan Kayan</strong></a>
·
<a href="http://mariobijelic.de/wordpress/"><strong>Mario Bijelic</strong></a>
·
<a href="https://www.cs.princeton.edu/~fheide/"><strong>Felix Heide</strong></a>
·
<a href="https://www.cs.princeton.edu/~jiadeng/"><strong>Jia Deng</strong></a>
</p>
<p align="center">
(*Equal Contribution)
</p>
<p align="center">
<a href="https://pvl.cs.princeton.edu/">Princeton Vision & Learning Lab (PVL)</a>
</p>
<h3 align="center"><a href="http://arxiv.org/abs/2603.26658">Paper</a> · <a href="https://zedd.cs.princeton.edu/">Project</a></h3>
<p align="center">
<a href="TODO">
<img src="assets/teaser.png" alt="FOSSA Teaser" width="100%">
</a>
</p>
<hr>
<h3>Overview</h3>
<blockquote>
<p>
We build the Infinigen Defocus synthetic dataset on top of Infinigen Indoors [33]. Infinigen is a procedural system for generating photorealistic indoor scenes. Owing to its procedural nature, it can produce unlimited variation at both the object and scene levels, yielding diverse shapes, layouts, and spatial compositions.
</p>
<p>
Infinigen uses Blender [8] for scene composition and rendering. Blender provides native support for camera aperture and focus distance, and supports synthesizing defocus effects during ray tracing using a thin-lens camera model [15]. This makes Blender suitable for generating realistic focus stacks with physically accurate defocus blur.
</p>
<p>
We modify the Infinigen generation pipeline so that, for each scene, it renders multiple images from the same camera pose while varying the aperture size and focus distance. We choose the rendering settings to match the distribution covered by our benchmark. Specifically, we render images using 5 aperture settings (F1.4/2.0/2.8/4.0/5.6), 9 focus distances (0.8/1.2/1.7/2.3/3.0/3.8/4.7/6.0/8.0m), and one additional all-in-focus image, resulting in <strong>5 × 9 + 1 = 46</strong> images per scene.
</p>
<p>
We use the rendered depth map of the all-in-focus image as the ground-truth depth. In total, we generate 500 scenes and manually reject those with degenerate object layouts or suboptimal camera placement, keeping the 200 scenes with the highest visual quality.
</p>
<br><br>
<a href="https://arxiv.org/abs/2603.26658"><strong>Paper (arXiv)</strong></a>
</blockquote>
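Blender's Python API exposes the thin-lens parameters mentioned above directly on the camera data block. A minimal sketch of configuring one defocused render is shown below; it assumes a scene with a camera object named `"Camera"` and the Cycles engine, and must be run inside Blender's embedded Python (`bpy` is not available as a standalone package):

```python
import bpy  # available only inside Blender's embedded Python

# Illustrative sketch: configure thin-lens depth of field for one render.
cam = bpy.data.objects["Camera"]          # assumed camera object name
cam.data.dof.use_dof = True               # enable depth-of-field simulation
cam.data.dof.aperture_fstop = 2.8         # one of the five f-stops (F1.4-F5.6)
cam.data.dof.focus_distance = 2.3         # one of the nine focus distances, in meters

# Ray-traced defocus requires a path-tracing engine such as Cycles.
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.render.render(write_still=True)
```

For the all-in-focus render, `use_dof` would instead be set to `False`, reducing the camera to a pinhole model.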
<p align="center">
<a href="https://arxiv.org/abs/2603.26658">
<img src="assets/Infinigen-Defocus.png" alt="Infinigen Defocus Teaser" width="100%">
</a>
</p>
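The per-scene rendering grid described above (5 f-stops × 9 focus distances, plus one all-in-focus image) can be enumerated in a few lines of plain Python; the function name and dictionary keys here are illustrative, not part of the released pipeline:

```python
# Enumerate the 46 per-scene render settings: every (f-stop, focus distance)
# pair, plus one all-in-focus render with depth of field disabled.
APERTURES = [1.4, 2.0, 2.8, 4.0, 5.6]                             # f-numbers
FOCUS_DISTANCES = [0.8, 1.2, 1.7, 2.3, 3.0, 3.8, 4.7, 6.0, 8.0]   # meters

def render_settings():
    settings = [
        {"fstop": f, "focus_m": d, "all_in_focus": False}
        for f in APERTURES
        for d in FOCUS_DISTANCES
    ]
    settings.append({"fstop": None, "focus_m": None, "all_in_focus": True})
    return settings

assert len(render_settings()) == 5 * 9 + 1  # 46 images per scene
```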
<hr>