---
license: apache-2.0
task_categories:
  - depth-estimation
tags:
  - transparency
  - video-depth-estimation
  - computer-vision
---

# TransPhy3D

Project Page | Paper | Code

TransPhy3D is a synthetic video corpus of transparent and reflective scenes, consisting of 11k sequences rendered with Blender/Cycles. It provides high-quality RGB frames along with physically accurate ground-truth depth and normal labels. The dataset was introduced in the paper "Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation".

## Introduction

This dataset aims to be the first transparent-object-oriented video dataset with pixel-perfect depth and normal labels across diverse categories and shapes. Scenes are assembled from a curated bank of category-rich static assets and shape-rich procedural assets, paired with glass, plastic, and metal materials.

## Quick Start

The dataset repository includes a demo script to load and visualize the data:

```bash
python load_demo.py --data_path test/0826_0006_materials.000000.tar --output outputs
```

The results will be saved in the `outputs/` directory as follows:

```
outputs/
|-- output_depth.mp4
|-- output_normal.mp4
`-- output_rgb.mp4
```
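The shard passed to the demo script is a plain `.tar` archive of per-sample files. If you want to inspect one without the demo script, here is a minimal standard-library sketch that groups archive members by sample key; the member names in the example are hypothetical placeholders, not the dataset's actual naming scheme:

```python
import io
import tarfile
from collections import defaultdict

def group_samples(tar: tarfile.TarFile) -> dict:
    """Group tar members by sample key (everything before the first dot)."""
    groups = defaultdict(list)
    for member in tar.getmembers():
        if member.isfile():
            key, _, ext = member.name.partition(".")
            groups[key].append(ext)
    return dict(groups)

# Build a tiny in-memory shard to demonstrate. These member names are
# placeholders for illustration only; check a real shard for the true layout.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("000000.rgb.png", "000000.depth.exr", "000001.rgb.png"):
        data = b"stub"
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    groups = group_samples(tar)
print(groups)  # {'000000': ['rgb.png', 'depth.exr'], '000001': ['rgb.png']}
```

Swap the in-memory buffer for `tarfile.open("test/0826_0006_materials.000000.tar")` to list the modalities stored per frame in a real shard.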

## Data Structure

The dataset is organized as follows:

```
|-- parametric_train  #* the shape-rich dataset
|   |-- test
|   |   |-- 1_materials.000000.tar
|   |   `-- ...
|   |-- training
|   `-- validation
|-- test              #* TransPhy3D-Test
`-- train             #* the category-rich dataset
```
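To enumerate the shards per split programmatically, a small sketch is below. The `list_shards` helper is hypothetical (not part of the repo), and the directory tree built in the example is a stand-in that only mirrors the layout above with placeholder shard names:

```python
import tempfile
from pathlib import Path

def list_shards(root: Path) -> dict:
    """Map each split directory (relative to root) to its sorted .tar shard names."""
    shards = {}
    for tar_path in sorted(root.rglob("*.tar")):
        split = tar_path.parent.relative_to(root).as_posix()
        shards.setdefault(split, []).append(tar_path.name)
    return shards

# Build a tiny stand-in tree mirroring the README layout (placeholder files,
# not real dataset content) purely to demonstrate the helper.
tmp = Path(tempfile.mkdtemp())
for rel in (
    "parametric_train/test/1_materials.000000.tar",
    "test/0826_0006_materials.000000.tar",
    "train/0000_0000_materials.000000.tar",  # hypothetical shard name
):
    path = tmp / rel
    path.parent.mkdir(parents=True, exist_ok=True)
    path.touch()

shards = list_shards(tmp)
print(shards)
```

Pointing `list_shards` at the downloaded dataset root would give a quick inventory of how many shards each split contains.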

## Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{dkt2025,
  title   = {Diffusion Knows Transparency: Repurposing Video Diffusion for Transparent Object Depth and Normal Estimation},
  author  = {Shaocong Xu and Songlin Wei and Qizhe Wei and Zheng Geng and Hong Li and Licheng Shen and Qianpu Sun and Shu Han and Bin Ma and Bohan Li and Chongjie Ye and Yuhang Zheng and Nan Wang and Saining Zhang and Hao Zhao},
  journal = {arXiv preprint arXiv:2512.23705},
  year    = {2025}
}
```