---
license: cc-by-4.0
size_categories:
  - n<1K
pretty_name: Display Inverse Rendering Dataset
tags:
  - computer-vision
  - inverse-rendering
  - photometric-stereo
  - computer-graphics
  - display
  - polarization
  - stereo
  - multi-light
  - illumination-multiplexing
task_categories:
  - image-to-3d
papers:
  - title: A Real-world Display Inverse Rendering Dataset
    url: https://huggingface.co/papers/2508.14411
homepage: https://michaelcsj.github.io/DIR/
repository: https://github.com/MichaelCSJ/DIR
---

# Display Inverse Rendering Dataset

## Introduction

This dataset was created for display inverse rendering. It includes multi-light stereo images captured by polarization cameras, together with ground-truth (GT) geometry (pixel-aligned point clouds and surface normals) scanned with a high-precision 3D scanner.

The DIR dataset contains assets captured from an LCD and polarization-camera system:

- **OLAT images**: captured one display superpixel at a time; they can be combined to simulate arbitrary display patterns.
- **GT geometry**: scanned with a high-precision 3D scanner.
- **Lighting information**: carefully calibrated light directions, display non-linearity, and backlight.
- **Stereo imaging**: an optional feature for initializing rough geometry.
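Because image formation is linear in illumination, a capture under any display pattern equals a weighted sum of the OLAT images. A minimal sketch of this idea (function name, array shapes, and toy sizes are assumptions for illustration, not the dataset's API):

```python
import numpy as np

def simulate_display_pattern(olat_images, pattern_weights):
    """Simulate a capture under an arbitrary display pattern.

    olat_images: (N, H, W, 3) array, one image per display superpixel (OLAT).
    pattern_weights: (N,) array, per-superpixel display intensity in [0, 1].
    Image formation is linear in illumination, so the simulated capture is
    the weighted sum of the one-light-at-a-time captures.
    """
    assert olat_images.shape[0] == pattern_weights.shape[0]
    # Contract the weight vector against the stack of OLAT images
    return np.tensordot(pattern_weights, olat_images, axes=1)

# Toy data standing in for the 144 (16x9 superpixel) OLAT captures
olat = np.random.default_rng(0).random((144, 4, 4, 3))
white = simulate_display_pattern(olat, np.ones(144))      # full-on pattern
single = simulate_display_pattern(olat, np.eye(144)[0])   # reproduces OLAT 0
```

A pattern of all ones reproduces the fully lit ("white") capture, and a one-hot pattern reproduces the corresponding OLAT image.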

**Why display inverse rendering?** Display inverse rendering uses a monitor as a per-pixel, programmable light source to reconstruct object geometry and reflectance from captured images. Key features include:

- **Illumination multiplexing**: encodes multiple lights in each capture, reducing the number of required inputs.
- **Leveraging polarization**: enables diffuse-specular separation based on optics.
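To illustrate the polarization idea with a generic Stokes-vector computation (not necessarily the paper's exact method): a polarization camera measures intensity behind four polarizer angles, from which the unpolarized (diffuse-leaning) and linearly polarized (specular-leaning) components can be split.

```python
import numpy as np

def split_by_polarization(i0, i45, i90, i135):
    """Split four polarizer-angle images into unpolarized and polarized parts.

    Uses the linear Stokes parameters: s0 = total intensity,
    (s1, s2) = linearly polarized components.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # i0 + i90 == i45 + i135 == s0
    s1 = i0 - i90
    s2 = i45 - i135
    i_pol = np.sqrt(s1**2 + s2**2)       # magnitude of linear polarization (Imax - Imin)
    unpolarized = s0 - i_pol             # 2 * Imin: diffuse-leaning part
    polarized = i_pol                    # specular-leaning part
    return unpolarized, polarized
```

This is the standard sinusoid fit I(θ) = Imin + (Imax − Imin)·cos²(θ − φ) expressed through Stokes parameters; real separation also has to handle polarized diffuse reflection, which this sketch ignores.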

## Structure

- **DIR-basic**: The basic version of the dataset released with the paper. It includes stereo polarized RAW images, RGB images from a reference view, and ground-truth surface normals and point clouds. All images are captured under a multi-light configuration projected through 16×9 superpixels on the display.

  ```
  ├── A
  │  ├── GT_geometry (for the reference (main) view)
  │  │  ├── 'normal.npy'
  │  │  ├── 'normal.png'
  │  │  ├── 'point_cloud_gt.npy'
  │  ├── main
  │  │  ├── diffuseNspecular
  │  │  │  ├── '000 - 143.png'
  │  │  │  ├── 'black.png'
  │  │  │  ├── 'white.png'
  │  │  ├── RAW_polar
  │  │  │  ├── '000 - 143_[SHUTTER_TIME(us)].png'
  │  │  │  ├── 'black_[SHUTTER_TIME(us)].png'
  │  │  │  ├── 'white_[SHUTTER_TIME(us)].png'
  │  ├── side
  │  │  ├── diffuseNspecular
  │  │  │  ├── '000 - 143.png'
  │  │  │  ├── 'black.png'
  │  │  │  ├── 'white.png'
  │  │  ├── RAW_polar
  │  │  │  ├── '000 - 143_[SHUTTER_TIME(us)].png'
  │  │  │  ├── 'black_[SHUTTER_TIME(us)].png'
  │  │  │  ├── 'white_[SHUTTER_TIME(us)].png'
  │  ├── mask.png
  │  ├── point_cloud.npy (pixels unprojected using depth and focal length)
  ```

- **DIR-pms**: This version follows the DiLiGenT format and has the same composition as DIR-basic. It provides multi-light RGB images from the reference view along with related information and the ground-truth normal maps.

  ```
  ├── A [Suffix (default "PNG")]
  │  ├── '000 - 143.png'
  │  ├── 'filenames.txt'
  │  ├── 'light_directions.txt'
  │  ├── 'light_intensities.txt'
  │  ├── 'mask.png'
  │  ├── 'Normal_gt.mat'
  ```
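A sketch of loading the per-object GT assets with NumPy (file layout taken from the tree above; array shapes and the `data/DIR-basic/A` path are assumptions):

```python
import os
import numpy as np

def load_gt_geometry(obj_root):
    """Load GT normals and point cloud for one object, e.g. data/DIR-basic/A."""
    gt_dir = os.path.join(obj_root, "GT_geometry")
    normal = np.load(os.path.join(gt_dir, "normal.npy"))          # per-pixel GT normals
    points = np.load(os.path.join(gt_dir, "point_cloud_gt.npy"))  # GT point cloud
    return normal, points

def normals_to_rgb(normal):
    """Map unit normals in [-1, 1] to the usual 8-bit RGB visualization."""
    return np.clip((normal * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)
```

`normals_to_rgb` reproduces the common convention behind files like `normal.png`: xyz components shifted from [-1, 1] into [0, 255].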

## Getting Started

βš™οΈ Installation

```shell
git clone https://github.com/MichaelCSJ/DIR.git
cd DIR
conda env create -f environment.yml
conda activate DIR
```

πŸ—‚οΈ Dataset Preparation

Download the DIR dataset to run our display inverse rendering baseline. It consists of 16 real-world objects with diverse shapes and materials, captured under precisely calibrated directional lighting. Several versions of the dataset are available: 'DIR-basic', 'DIR-pms', 'DIR-hdr', and 'DIR-multi-distance'.

- **DIR-basic**: The basic version of the dataset released with the paper, containing stereo polarized RAW images, RGB images from a reference view, and ground-truth surface normals and point clouds. See the directory tree in the Structure section above.
- **DIR-pms**: Follows the DiLiGenT format with the same composition as DIR-basic: multi-light RGB images from the reference view, related information, and ground-truth normal maps. See the Structure section above.
- **DIR-hdr**: TBD.
- **DIR-multi-distance**: TBD.
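DIR-pms's DiLiGenT-style layout plugs directly into classic least-squares photometric stereo. A minimal Lambertian sketch (this is the textbook method, not the paper's baseline; reading `light_directions.txt` with plain `np.loadtxt` is an assumption about its exact format):

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Estimate normals and albedo from N images under N directional lights.

    images: (N, H, W) grayscale observations.
    light_dirs: (N, 3) unit light directions, e.g. loaded with
        np.loadtxt('light_directions.txt').
    Solves light_dirs @ G = I per pixel, where G = albedo * normal.
    """
    n, h, w = images.shape
    # Least-squares solve for G = (3, H*W), the scaled normal at each pixel
    G, *_ = np.linalg.lstsq(light_dirs, images.reshape(n, -1), rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

On real data, shadowed and saturated pixels should be masked out (e.g. with `mask.png`) before solving.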

After downloading, place the datasets under data/.

### 🔥 Normal and Basis BRDF Recovery

To run the baseline, execute train.py with the following command:

```shell
python train.py --name YOUR_SESSION_NAME --dataset_root YOUR_DATASET_PATH
```

By default, this code performs inverse rendering using multi-light images captured with an OLAT pattern. To instead use a small number of multi-light images captured with a multiplexed display pattern, run:

```shell
python train.py --name YOUR_SESSION_NAME --dataset_root YOUR_DATASET_PATH --use_multiplexing True --initial_light_pattern YOUR_DISPLAY_PATTERNS
```

You can use the display patterns provided by DDPS for YOUR_DISPLAY_PATTERNS. Place the display patterns under patterns/.
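The point of multiplexing is that per-light images remain recoverable from a small set of multiplexed captures by inverting the pattern matrix. A least-squares sketch (function name and array shapes are assumptions for illustration):

```python
import numpy as np

def demultiplex(captures, patterns):
    """Recover per-superpixel (OLAT) images from multiplexed captures.

    captures: (K, H, W) images taken under K display patterns.
    patterns: (K, N) matrix; row k holds the N superpixel weights of pattern k.
    Solves patterns @ olat = captures per pixel in the least-squares sense.
    """
    k, h, w = captures.shape
    olat, *_ = np.linalg.lstsq(patterns, captures.reshape(k, -1), rcond=None)
    return olat.reshape(-1, h, w)
```

Recovery is exact when the pattern matrix has full column rank (K ≥ N); with fewer, noisier captures, the choice of patterns governs conditioning, which is what learned patterns optimize.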

*Figure placeholders: lighting patterns (initial) and lighting patterns (learned).*

Once training completes, a folder named YYYYMMDD_HHMMSS will be created inside the results/YOUR_SESSION_NAME directory, containing the TensorBoard logs, OLAT rendering results, and the fitted parameters for each object.

πŸ–ΌοΈ Novel Relighting (Optional)

Run relighting.py to render images under novel directional lighting using the recovered normal map and BRDF parameter maps. To output an .avi video:

```shell
python relighting.py --datadir ./results/YOUR_SESSION_NAME/OBJECT_NAME --format avi
```
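Conceptually, relighting shades each pixel with its recovered normal and the novel light direction. A diffuse-only sketch (the actual script also uses the fitted BRDF parameters; names and shapes here are assumptions):

```python
import numpy as np

def relight_diffuse(normals, albedo, light_dir):
    """Render one frame under a novel directional light (diffuse term only).

    normals: (H, W, 3) unit normals; albedo: (H, W); light_dir: (3,).
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)                 # normalize the light direction
    shading = np.clip(normals @ l, 0.0, None) # per-pixel max(0, n . l)
    return albedo * shading
```

Sweeping `light_dir` over a circle of directions and writing each frame out produces the kind of relighting video the command above generates.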

## Citation

If you find this repository useful, please consider citing this paper:

```bibtex
@inproceedings{choi2025realworld,
      title={A Real-world Display Inverse Rendering Dataset},
      author={Seokjun Choi and Hoon-Gyu Chung and Yujin Jeon and Giljoo Nam and Seung-Hwan Baek},
      booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
      year={2025},
      url={https://huggingface.co/papers/2508.14411}
}
```

## TODO

- Release training code.
- Release Display Inverse Rendering (DIR) dataset.
- Release expanded version of the DIR dataset (HDR).
- Release expanded version of the DIR dataset (multi-distance).
- Release additional visualization tools.
- Release raw image processing code.