---
license: mit
task_categories:
- image-to-3d
- depth-estimation
- image-to-image
tags:
- 3d-reconstruction
- multi-view
- nerf
- 3d-gaussian-splatting
- novel-view-synthesis
- benchmark
- colmap
- point-cloud
- depth-map
- raw-image
- computational-photography
pretty_name: "RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction"
size_categories:
- 1K<n<10K
---
<div align="center">
# RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction
[![Project Page](https://img.shields.io/badge/🌐_Project_Page-RealX3D-blue?style=for-the-badge)](https://i2wm.github.io/3DRR_2026/)
[![GitHub](https://img.shields.io/badge/GitHub-Code-black?style=for-the-badge&logo=github)](https://github.com/ShuhongLL/RealX3D)
[![arXiv](https://img.shields.io/badge/arXiv-2512.23437-b31b1b?style=for-the-badge)](https://arxiv.org/abs/2512.23437)
[![Challenge](https://img.shields.io/badge/🏆_3DRR_Challenge-NTIRE_@_CVPR_2026-purple?style=for-the-badge)](https://www.codabench.org/competitions/13854/)
[![License](https://img.shields.io/badge/License-MIT-green?style=for-the-badge)](https://opensource.org/licenses/MIT)
</div>
**RealX3D** is a real-world benchmark dataset for multi-view 3D reconstruction under challenging capture conditions. It provides multi-view RGB images (both processed JPEG and Sony RAW), COLMAP sparse reconstructions, and high-precision 3D ground-truth geometry (point clouds, meshes, and rendered depth maps) across a diverse set of scenes and degradation types.
<div align="center">
<table>
<tr>
<td align="center"><b>🌙 Low Light</b></td>
<td align="center"><b>💨 Smoke</b></td>
</tr>
<tr>
<td align="center">
<video src="https://raw.githubusercontent.com/I2WM/i2wm.github.io/main/3DRR_2026/static/videos/lowlight_teaser_compressed.mp4" width="400" controls autoplay muted loop></video>
</td>
<td align="center">
<video src="https://raw.githubusercontent.com/I2WM/i2wm.github.io/main/3DRR_2026/static/videos/smoke_teaser_compressed.mp4" width="400" controls autoplay muted loop></video>
</td>
</tr>
</table>
</div>
## ✨ Key Features
- **9 real-world degradation conditions**: defocus (mild/strong), motion blur (mild/strong), low light, smoke, reflection, dynamic objects, and varying exposure.
- **Full-resolution (~7000×4700) and quarter-resolution (~1800×1200)** JPEG images with COLMAP reconstructions.
- **Sony RAW (ARW)** sensor data with complete EXIF metadata for 7 conditions.
- **Per-frame metric depth maps** rendered from laser-scanned meshes.
- **Camera poses and intrinsics** in both COLMAP binary format and NeRF-compatible `transforms.json`.
## πŸ“ Dataset Structure
```
RealX3D/
├── data/              # Full-resolution JPEG images + COLMAP reconstructions
├── data_4/            # Quarter-resolution JPEG images + COLMAP reconstructions
├── baseline_results/  # Baseline method renderings on data_4, available for direct download
├── data_arw/          # Sony RAW (ARW) sensor data
├── pointclouds/       # 3D point clouds, meshes, and metric depth maps
└── scripts/           # Utility scripts
```
## 🚀 Release Status
> - [x] `data/` – Full-resolution JPEG images + COLMAP
> - [x] `data_4/` – Quarter-resolution JPEG images + COLMAP
> - [x] `baseline_results/` – Baseline rendering results
> - [ ] `data_arw/` – Sony RAW (ARW) sensor data
> - [ ] `pointclouds/` – 3D ground-truth geometry (point clouds, meshes, depth maps)
## 🌧️ Capture Conditions
| Condition | Description |
|-----------|-------------|
| `defocus_mild` | Mild defocus blur |
| `defocus_strong` | Strong defocus blur |
| `motion_mild` | Mild motion blur |
| `motion_strong` | Strong motion blur |
| `dynamic` | Dynamic objects in the scene |
| `reflection` | Specular reflections |
| `lowlight` | Low-light environment |
| `smoke` | Smoke / particulate occlusion |
| `varyexp` | Varying exposure |
## πŸ›οΈ Scenes
Akikaze, BlueHawaii, Chocolate, Cupcake, GearWorks, Hinoki, Koharu, Laboratory, Limon, MilkCookie, Natsume, Popcorn, Sculpture, Shirohana, Ujikintoki
---
## 📸 `data/` – Full-Resolution JPEG Images
Full-resolution JPEG images and corresponding COLMAP sparse reconstructions, organized by **condition → scene**.
### Per-Scene Directory Layout
```
data/{condition}/{scene}/
├── train/                  # Training images (~23–31 frames)
│   ├── 0001.JPG
│   └── ...
├── val/                    # Validation images (~23–31 frames)
│   └── ...
├── test/                   # Test images (~4–6 frames)
│   └── ...
├── transforms_train.json   # Camera parameters & poses (training split)
├── transforms_val.json     # Camera parameters & poses (validation split)
├── transforms_test.json    # Camera parameters & poses (test split)
├── point3d.ply             # COLMAP sparse 3D point cloud
├── colmap2world.txt        # 4×4 COLMAP-to-world coordinate transform
├── sparse/0/               # COLMAP sparse reconstruction
│   ├── cameras.bin / cameras.txt
│   ├── images.bin / images.txt
│   └── points3D.bin / points3D.txt
├── distorted/sparse/0/     # Pre-undistortion COLMAP reconstruction
└── stereo/                 # MVS configuration files
```
### πŸ“ `transforms.json` Format
Each `transforms_*.json` file contains shared camera intrinsics and per-frame extrinsics following [`Blender Dataset`](https://docs.nerf.studio/quickstart/data_conventions.html) format, for example:
```json
{
"camera_angle_x": 1.295,
"camera_angle_y": 0.899,
"fl_x": 4778.31,
"fl_y": 4928.04,
"cx": 3649.23,
"cy": 2343.41,
"w": 7229.0,
"h": 4754.0,
"k1": 0, "k2": 0, "k3": 0, "k4": 0,
"p1": 0, "p2": 0,
"is_fisheye": false,
"aabb_scale": 2,
"frames": [
{
"file_path": "train/0001.JPG",
"sharpness": 25.72,
"transform_matrix": [[...], [...], [...], [...]]
}
]
}
```
All distortion coefficients are zero (images are pre-undistorted).
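The fields above map directly onto a standard pinhole model: `fl_x`, `fl_y`, `cx`, `cy` form the 3×3 intrinsic matrix, and each frame's `transform_matrix` is a 4×4 camera-to-world pose in the NeRF/Blender convention (camera looking down −Z). A minimal sketch using the example intrinsics above, with a hypothetical placeholder pose standing in for a real frame:

```python
import numpy as np

# Shared intrinsics copied from the example transforms.json above.
meta = {"fl_x": 4778.31, "fl_y": 4928.04, "cx": 3649.23, "cy": 2343.41}

# 3x3 pinhole camera matrix (all distortion coefficients are zero,
# since the images are pre-undistorted).
K = np.array([
    [meta["fl_x"], 0.0,          meta["cx"]],
    [0.0,          meta["fl_y"], meta["cy"]],
    [0.0,          0.0,          1.0],
])

# Each frame's transform_matrix is a 4x4 camera-to-world pose; a placeholder
# translation-only pose is used here instead of reading a real frame.
c2w = np.eye(4)
c2w[:3, 3] = [0.1, 0.2, 0.3]  # hypothetical camera position

# World-to-camera extrinsics (as used for projection) is the inverse pose.
w2c = np.linalg.inv(c2w)
```

In practice `c2w` would come from `frames[i]["transform_matrix"]` in `transforms_train.json`.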
### πŸ–ΌοΈ Image Specifications
- **Format**: JPEG
- **Resolution**: ~7000 Γ— 4700 pixels (varies slightly across scenes)
- **Camera**: Sony ILCE-7M4 (Ξ±7 IV)
- **Camera Model**: PINHOLE (pre-undistorted)
---
## 📸 `data_4/` – Quarter-Resolution JPEG Images (Used for 2026 NTIRE-3DRR Challenge)
Identical directory structure to `data/`, with images downsampled to **1/4 resolution** (~1800 × 1200 pixels). Camera intrinsics (`fl_x`, `fl_y`, `cx`, `cy`, `w`, `h`) in the `transforms.json` files are adjusted accordingly. All 9 capture conditions and their scenes are included.
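The intrinsic adjustment is a uniform scale: focal lengths, principal point, and image dimensions all shrink by the same factor. A sketch of the relationship between the two resolutions, using the example values from the `data/` section:

```python
# Uniform 1/4 scaling of pinhole intrinsics between data/ and data_4/.
scale = 0.25
full = {"fl_x": 4778.31, "fl_y": 4928.04, "cx": 3649.23,
        "cy": 2343.41, "w": 7229.0, "h": 4754.0}
quarter = {k: v * scale for k, v in full.items()}
# e.g. quarter["w"] == 1807.25, quarter["h"] == 1188.5
```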
---
## 📷 `data_arw/` – Sony RAW Data
Sony ARW (TIFF-wrapped RAW) sensor data preserving full EXIF metadata.
### Differences from `data/`
- **Image format**: `.ARW` (~33–35 MB per frame)
- **7 conditions available**: `defocus_mild`, `defocus_strong`, `dynamic`, `lowlight`, `reflection`, `smoke`, `varyexp` (motion blur conditions are **excluded**)
### Per-Scene Directory Layout
```
data_arw/{condition}/{scene}/
├── train/      # ARW raw images
├── val/
├── test/
└── sparse/0/   # COLMAP sparse reconstruction
```
---
## πŸ“ `pointclouds/` β€” 3D Ground Truth
High-precision 3D geometry ground truth, organized directly by **scene name** (geometry is shared across capture conditions for the same scene).
### Per-Scene Directory Layout
```
pointclouds/{scene}/
├── cull_pointcloud.ply   # Culled point cloud (view-frustum trimmed)
├── cull_mesh.ply         # Culled triangle mesh
├── colmap2world.npy      # 4×4 COLMAP-to-world transform (NumPy format)
└── depth/                # 16-bit depth maps rendered from the mesh
    ├── 0001.png
    ├── 0002.png
    └── ...
```
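The depth maps are stored as 16-bit PNGs; the metric scale factor is not stated here, so the `SCALE` below is a labeled assumption. A self-contained round-trip sketch using Pillow, writing a synthetic map in memory instead of reading a real `depth/0001.png`:

```python
import io

import numpy as np
from PIL import Image

# Synthetic 16-bit depth map standing in for depth/0001.png.
depth_raw = np.arange(12, dtype=np.uint16).reshape(3, 4) * 1000
buf = io.BytesIO()
Image.fromarray(depth_raw).save(buf, format="PNG")
buf.seek(0)

# Read the 16-bit values back; Pillow decodes 16-bit grayscale PNGs losslessly.
loaded = np.array(Image.open(buf)).astype(np.uint16)

# SCALE is hypothetical (e.g. stored millimetres -> metres); check the
# dataset's scripts/ for the actual convention before relying on it.
SCALE = 1.0 / 1000.0
depth_metric = loaded.astype(np.float32) * SCALE
```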
The `colmap2world.npy` matrix aligns COLMAP reconstructions to the world coordinate system of the ground-truth geometry. The same transform is also stored as `colmap2world.txt` in the corresponding `data/` directories.
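Applying the transform is standard homogeneous-coordinate matrix math. A sketch with a hypothetical similarity transform standing in for a real `colmap2world.npy` (which would be loaded with `np.load`):

```python
import numpy as np

# Hypothetical stand-in for T = np.load("pointclouds/{scene}/colmap2world.npy").
T = np.eye(4)
T[:3, :3] *= 2.0            # uniform scale
T[:3, 3] = [1.0, 0.0, 0.0]  # translation

# COLMAP-frame points (e.g. read from point3d.ply), as an (N, 3) array.
pts_colmap = np.array([[0.0, 0.0, 0.0],
                       [1.0, 2.0, 3.0]])

# Lift to homogeneous coordinates, transform, and drop the w component.
pts_h = np.hstack([pts_colmap, np.ones((len(pts_colmap), 1))])
pts_world = (T @ pts_h.T).T[:, :3]
# pts_world[0] -> [1, 0, 0]; pts_world[1] -> [3, 4, 6]
```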
---
## 📜 Citation
```bibtex
@article{liu2025realx3d,
  title   = {RealX3D: A Physically-Degraded 3D Benchmark for Multi-view
             Visual Restoration and Reconstruction},
  author  = {Liu, Shuhong and Bao, Chenyu and Cui, Ziteng and Liu, Yun
             and Chu, Xuangeng and Gu, Lin and Conde, Marcos V and
             Umagami, Ryo and Hashimoto, Tomohiro and Hu, Zijian and others},
  journal = {arXiv preprint arXiv:2512.23437},
  year    = {2025}
}
```
---
## 📄 License
This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).