---
license: cc-by-4.0
size_categories:
- n<1K
pretty_name: Display Inverse Rendering Dataset
tags:
- computer-vision
- inverse-rendering
- photometric-stereo
- computer-graphics
- display
- polarization
- stereo
- multi-light
- illumination-multiplexing
task_categories:
- image-to-3d
papers:
- title: A Real-world Display Inverse Rendering Dataset
  url: https://huggingface.co/papers/2508.14411
homepage: https://michaelcsj.github.io/DIR/
repository: https://github.com/MichaelCSJ/DIR
---
# Display Inverse Rendering Dataset

- [Paper](https://huggingface.co/papers/2508.14411)
- [Project Page](https://michaelcsj.github.io/DIR/)
- [GitHub Repository](https://github.com/MichaelCSJ/DIR)
## Introduction

This dataset was created for Display Inverse Rendering (DIR). It contains assets captured from an LCD and polarization-camera system: multi-light stereo images captured by polarization cameras, together with ground-truth geometry (pixel-aligned point clouds and surface normals) scanned by a high-precision 3D scanner.
- **OLAT Images**: captured under individual display superpixels; they can be linearly combined to simulate arbitrary display patterns.
- **GT Geometry**: scanned with a high-precision 3D scanner.
- **Lighting Information**: carefully calibrated light directions, display non-linearity, and backlight.
- **Stereo Imaging**: an optional feature to initialize rough geometry.
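Because light transport is linear, an arbitrary display pattern can be simulated as a weighted sum of the per-superpixel OLAT captures. A minimal sketch, assuming the 144 OLAT images are stacked into a NumPy array (the function name and shapes are illustrative, not part of the released code):

```python
import numpy as np

def simulate_pattern(olat: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Simulate a display pattern as a weighted sum of OLAT captures.

    olat:    (144, H, W, 3) float images, one per 16x9 superpixel.
    pattern: (144,) per-superpixel display intensities in [0, 1].
    """
    # Weighted sum over the light index (linearity of light transport).
    return np.tensordot(pattern, olat, axes=1)
```

In practice the black (all-lights-off) frame is typically subtracted from each OLAT capture first, so the ambient term is not accumulated once per superpixel.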
**Why display inverse rendering?** Display inverse rendering uses a monitor as a per-pixel, programmable light source to reconstruct object geometry and reflectance from captured images. Key features include:

- **Illumination Multiplexing**: encodes multiple lights into each capture, reducing the number of required input images.
- **Leveraging Polarization**: enables optics-based diffuse-specular separation.
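To illustrate the multiplexing idea (a sketch under assumed shapes, not the paper's exact pipeline): if each capture mixes the N superpixel lights through a known pattern matrix, the per-light responses can be recovered by least squares:

```python
import numpy as np

def demultiplex(captures: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Recover per-light (OLAT) responses from multiplexed captures.

    captures: (K, P) vectorized images, one row per multiplexed capture.
    W:        (K, N) pattern matrix; row k holds the weights of the N
              superpixel lights used in capture k.
    Returns:  (N, P) least-squares estimate of the per-light responses.
    """
    L, *_ = np.linalg.lstsq(W, captures, rcond=None)
    return L
```

A well-conditioned pattern matrix (e.g., learned patterns rather than one-hot OLAT rows) lets each capture collect light from many superpixels at once, which is what reduces the required number of inputs.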
## Structure

- **DIR-basic**: The basic version of the dataset released with the paper. It includes stereo polarized RAW images, RGB images from a reference view, and ground-truth surface normals and point clouds. All images are captured under a multi-light configuration projected through 16×9 superpixels on the display.
  ```
  A
  ├── GT_geometry                # for the reference (main) view
  │   ├── normal.npy
  │   ├── normal.png
  │   └── point_cloud_gt.npy
  ├── main
  │   ├── diffuseNspecular
  │   │   ├── 000 - 143.png
  │   │   ├── black.png
  │   │   └── white.png
  │   └── RAW_polar
  │       ├── 000 - 143_[SHUTTER_TIME(us)].png
  │       ├── black_[SHUTTER_TIME(us)].png
  │       └── white_[SHUTTER_TIME(us)].png
  ├── side
  │   ├── diffuseNspecular
  │   │   ├── 000 - 143.png
  │   │   ├── black.png
  │   │   └── white.png
  │   └── RAW_polar
  │       ├── 000 - 143_[SHUTTER_TIME(us)].png
  │       ├── black_[SHUTTER_TIME(us)].png
  │       └── white_[SHUTTER_TIME(us)].png
  ├── mask.png
  └── point_cloud.npy            # unprojected pixels w.r.t. depth & focal length
  ```
- **DIR-pms**: This version follows the DiLiGenT format and has the same composition as DIR-basic. It provides multi-light RGB images from the reference view along with the related calibration files and the ground-truth normal maps.
  ```
  A[Suffix]                      # default suffix: "PNG"
  ├── 000 - 143.png
  ├── filenames.txt
  ├── light_directions.txt
  ├── light_intensities.txt
  ├── mask.png
  └── Normal_gt.mat
  ```
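The comment on `point_cloud.npy` above refers to unprojecting each pixel using its depth and the camera's focal length. A minimal pinhole-camera sketch (intrinsics and variable names are assumptions, not the dataset's calibration format):

```python
import numpy as np

def unproject(depth: np.ndarray, f: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map to a camera-space point cloud.

    depth:    (H, W) metric depth per pixel.
    f:        focal length in pixels.
    (cx, cy): principal point in pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pinhole model: x = (u - cx) * z / f, y = (v - cy) * z / f.
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3) pixel-aligned points
```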
## Getting Started

### Installation

```shell
git clone https://github.com/MichaelCSJ/DIR.git
cd DIR
conda env create -f environment.yml
conda activate DIR
```
### Dataset Preparation

Download the DIR dataset to run our display inverse rendering baseline. It consists of 16 real-world objects with diverse shapes and materials captured under precisely calibrated directional lighting. The dataset is released in several versions: DIR-basic, DIR-pms, DIR-hdr, and DIR-multi-distance.
- **DIR-basic** and **DIR-pms**: described in the Structure section above.
- **DIR-hdr**: TBD.
- **DIR-multi-distance**: TBD.
After downloading, place the files under `data/` following the directory trees shown above.
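Since DIR-pms follows DiLiGenT conventions, it can feed classic photometric-stereo code directly. A sketch of loading the per-light calibration files and estimating Lambertian normals by least squares (file contents and shapes are assumptions based on the DiLiGenT layout, not the released loader):

```python
import numpy as np

def load_lights(obj_dir: str):
    """Read DiLiGenT-style light calibration files (one light per line)."""
    dirs = np.loadtxt(f"{obj_dir}/light_directions.txt")   # (N, 3) directions
    ints = np.loadtxt(f"{obj_dir}/light_intensities.txt")  # (N, 3) RGB scales
    return dirs, ints

def lambertian_normals(images: np.ndarray, light_dirs: np.ndarray) -> np.ndarray:
    """Classic least-squares photometric stereo.

    images:     (N, P) grayscale observations, one row per light.
    light_dirs: (N, 3) calibrated light directions.
    Returns:    (P, 3) unit surface normals.
    """
    # Solve images = light_dirs @ (albedo * normal) per pixel, then normalize.
    g, *_ = np.linalg.lstsq(light_dirs, images, rcond=None)  # (3, P)
    norm = np.clip(np.linalg.norm(g, axis=0, keepdims=True), 1e-8, None)
    return (g / norm).T
```

In practice the per-light intensities are divided out of the observations first, and shadowed or saturated pixels are masked before solving.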
### Normal and Basis-BRDF Recovery

To run the baseline, execute `train.py` with the following command:

```shell
python train.py --name YOUR_SESSION_NAME --dataset_root YOUR_DATASET_PATH
```
By default, the code performs inverse rendering using multi-light images captured with an OLAT pattern. To use a smaller number of multi-light images captured with a multiplexed display pattern instead, run:

```shell
python train.py --name YOUR_SESSION_NAME --dataset_root YOUR_DATASET_PATH --use_multiplexing True --initial_light_pattern YOUR_DISPLAY_PATTERNS
```

You can use the display patterns provided by DDPS for `YOUR_DISPLAY_PATTERNS`. Place the display patterns under `patterns/`.
(Figures: initial and learned lighting patterns.)
Once training completes, a folder named `YYYYMMDD_HHMMSS` is created inside `results/SESSION/`, containing the TensorBoard logs, OLAT rendering results, and the fitted parameters for each object.
### Novel Relighting (Optional)

Run `relighting.py` to render images under novel directional lighting based on the recovered normal map and BRDF parameter maps.

To output an `.avi` video:

```shell
python relighting.py --datadir ./results/YOUR_SESSION_NAME/OBJECT_NAME --format avi
```
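Conceptually, directional relighting evaluates the recovered reflectance at a new light direction. A deliberately simplified Lambertian version (the baseline fits a richer basis-BRDF model; the function and maps here are illustrative):

```python
import numpy as np

def relight(normal: np.ndarray, albedo: np.ndarray,
            light_dir: np.ndarray) -> np.ndarray:
    """Render a Lambertian image from recovered normal and albedo maps.

    normal:    (H, W, 3) unit surface normals.
    albedo:    (H, W, 3) diffuse albedo.
    light_dir: (3,) unit light direction.
    """
    # Clamped cosine (foreshortening) term, broadcast over the RGB channels.
    shading = np.clip(normal @ light_dir, 0.0, None)[..., None]
    return albedo * shading
```

Sweeping `light_dir` over a circle of directions and writing each frame is what produces a relighting video like the `.avi` output above.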
## Citation

If you find this repository useful, please consider citing the paper:

```bibtex
@inproceedings{choi2025realworld,
  title={A Real-world Display Inverse Rendering Dataset},
  author={Seokjun Choi and Hoon-Gyu Chung and Yujin Jeon and Giljoo Nam and Seung-Hwan Baek},
  booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025},
  url={https://huggingface.co/papers/2508.14411}
}
```
## TODO

- Release training code.
- Release the Display Inverse Rendering (DIR) dataset.
- Release the expanded version of the DIR dataset (HDR).
- Release the expanded version of the DIR dataset (multi-distance).
- Release additional visualization tools.
- Release raw image processing code.