---
license: mit
task_categories:
- image-to-3d
- depth-estimation
- image-to-image
tags:
- 3d-reconstruction
- multi-view
- nerf
- 3d-gaussian-splatting
- novel-view-synthesis
- benchmark
- colmap
- point-cloud
- depth-map
- raw-image
- computational-photography
pretty_name: "RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction"
size_categories:
- 1K<n<10K
---

# RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction

[![Project Page](https://img.shields.io/badge/🌐_Project_Page-RealX3D-blue?style=for-the-badge)](https://i2wm.github.io/3DRR_2026/)
[![GitHub](https://img.shields.io/badge/GitHub-Code-black?style=for-the-badge&logo=github)](https://github.com/ShuhongLL/RealX3D)
[![arXiv](https://img.shields.io/badge/arXiv-2512.23437-b31b1b?style=for-the-badge)](https://arxiv.org/abs/2512.23437)
[![Challenge](https://img.shields.io/badge/🏆_3DRR_Challenge-NTIRE_@_CVPR_2026-purple?style=for-the-badge)](https://www.codabench.org/competitions/13854/)
[![License](https://img.shields.io/badge/License-MIT-green?style=for-the-badge)](https://opensource.org/licenses/MIT)

**RealX3D** is a real-world benchmark dataset for multi-view 3D reconstruction under challenging capture conditions. It provides multi-view RGB images (both processed JPEG and Sony RAW), COLMAP sparse reconstructions, and high-precision 3D ground-truth geometry (point clouds, meshes, and rendered depth maps) across a diverse set of scenes and degradation types.
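The benchmark is a full grid of capture conditions over scenes, so the per-scene folders described in this card can be enumerated directly. A minimal sketch, using the condition and scene names exactly as they appear in the `data/` and `data_4/` directory trees (the `scene_dirs` helper is illustrative, not part of the dataset tooling; note that `data_arw/` covers only 7 of the 9 conditions):

```python
from itertools import product

# Condition and scene names as used in the dataset's directory tree.
CONDITIONS = [
    "defocus_mild", "defocus_strong", "motion_mild", "motion_strong",
    "dynamic", "reflection", "lowlight", "smoke", "varyexp",
]
SCENES = [
    "Akikaze", "BlueHawaii", "Chocolate", "Cupcake", "GearWorks",
    "Hinoki", "Koharu", "Laboratory", "Limon", "MilkCookie",
    "Natsume", "Popcorn", "Sculpture", "Shirohana", "Ujikintoki",
]

def scene_dirs(root: str = "data_4") -> list[str]:
    """Enumerate every per-scene directory: {root}/{condition}/{scene}."""
    return [f"{root}/{c}/{s}" for c, s in product(CONDITIONS, SCENES)]

dirs = scene_dirs()
print(len(dirs))   # 9 conditions x 15 scenes = 135 per-scene folders
print(dirs[0])     # data_4/defocus_mild/Akikaze
```

Each of these folders then contains the `train/`, `val/`, and `test/` splits plus the COLMAP reconstruction described below.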
## ✨ Key Features

- **9 real-world degradation conditions**: defocus (mild/strong), motion blur (mild/strong), low light, smoke, reflection, dynamic objects, and varying exposure.
- **Full-resolution (~7000×4700) and quarter-resolution (~1800×1200)** JPEG images with COLMAP reconstructions.
- **Sony RAW (ARW)** sensor data with complete EXIF metadata for 7 conditions.
- **Per-frame metric depth maps** rendered from laser-scanned meshes.
- **Camera poses and intrinsics** in both COLMAP binary format and NeRF-compatible `transforms.json`.

## 📁 Dataset Structure

```
RealX3D/
├── data/              # Full-resolution JPEG images + COLMAP reconstructions
├── data_4/            # Quarter-resolution JPEG images + COLMAP reconstructions
├── baseline_results/  # Baseline rendering results on data_4 for direct download
├── data_arw/          # Sony RAW (ARW) sensor data
├── pointclouds/       # 3D point clouds, meshes, and metric depth maps
└── scripts/           # Utility scripts
```

## 🚀 Release Status

> - [x] `data/` — Full-resolution JPEG images + COLMAP
> - [x] `data_4/` — Quarter-resolution JPEG images + COLMAP
> - [x] `baseline_results/` — Baseline rendering results
> - [ ] `data_arw/` — Sony RAW (ARW) sensor data
> - [ ] `pointclouds/` — 3D ground-truth geometry (point clouds, meshes, depth maps)

## 🌧️ Capture Conditions

| Condition | Description |
|-----------|-------------|
| `defocus_mild` | Mild defocus blur |
| `defocus_strong` | Strong defocus blur |
| `motion_mild` | Mild motion blur |
| `motion_strong` | Strong motion blur |
| `dynamic` | Dynamic objects in the scene |
| `reflection` | Specular reflections |
| `lowlight` | Low-light environment |
| `smoke` | Smoke / particulate occlusion |
| `varyexp` | Varying exposure |

## 🏛️ Scenes

Akikaze, BlueHawaii, Chocolate, Cupcake, GearWorks, Hinoki, Koharu, Laboratory, Limon, MilkCookie, Natsume, Popcorn, Sculpture, Shirohana, Ujikintoki

---

## 📸 `data/` — Full-Resolution JPEG Images

Full-resolution JPEG images and corresponding COLMAP sparse
reconstructions, organized by **condition → scene**.

### Per-Scene Directory Layout

```
data/{condition}/{scene}/
├── train/                   # Training images (~23–31 frames)
│   ├── 0001.JPG
│   └── ...
├── val/                     # Validation images (~23–31 frames)
│   └── ...
├── test/                    # Test images (~4–6 frames)
│   └── ...
├── transforms_train.json    # Camera parameters & poses (training split)
├── transforms_val.json      # Camera parameters & poses (validation split)
├── transforms_test.json     # Camera parameters & poses (test split)
├── point3d.ply              # COLMAP sparse 3D point cloud
├── colmap2world.txt         # 4×4 COLMAP-to-world coordinate transform
├── sparse/0/                # COLMAP sparse reconstruction
│   ├── cameras.bin / cameras.txt
│   ├── images.bin / images.txt
│   └── points3D.bin / points3D.txt
├── distorted/sparse/0/      # Pre-undistortion COLMAP reconstruction
└── stereo/                  # MVS configuration files
```

### 📐 `transforms.json` Format

Each `transforms_*.json` file contains shared camera intrinsics and per-frame extrinsics following the [Blender Dataset](https://docs.nerf.studio/quickstart/data_conventions.html) convention, for example:

```json
{
  "camera_angle_x": 1.295,
  "camera_angle_y": 0.899,
  "fl_x": 4778.31,
  "fl_y": 4928.04,
  "cx": 3649.23,
  "cy": 2343.41,
  "w": 7229.0,
  "h": 4754.0,
  "k1": 0, "k2": 0, "k3": 0, "k4": 0,
  "p1": 0, "p2": 0,
  "is_fisheye": false,
  "aabb_scale": 2,
  "frames": [
    {
      "file_path": "train/0001.JPG",
      "sharpness": 25.72,
      "transform_matrix": [[...], [...], [...], [...]]
    }
  ]
}
```

All distortion coefficients are zero (images are pre-undistorted).

### 🖼️ Image Specifications

- **Format**: JPEG
- **Resolution**: ~7000 × 4700 pixels (varies slightly across scenes)
- **Camera**: Sony ILCE-7M4 (α7 IV)
- **Camera model**: PINHOLE (pre-undistorted)

---

## 📸 `data_4/` — Quarter-Resolution JPEG Images (Used for the 2026 NTIRE-3DRR Challenge)

Identical directory structure to `data/`, with images downsampled to **1/4 resolution** (~1800 × 1200 pixels).
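Under the pinhole model used here, the field of view and focal length are tied by `camera_angle = 2·atan(size / (2·fl))`, and uniform downsampling scales `fl_x`, `fl_y`, `cx`, `cy`, `w`, `h` by the same factor while leaving the angles unchanged. A minimal sketch using the intrinsics from the `transforms.json` example above (the `scale_intrinsics` helper is illustrative, not part of the dataset tooling):

```python
import math

# Full-resolution intrinsics from the transforms.json example above.
intr = {"fl_x": 4778.31, "fl_y": 4928.04, "cx": 3649.23, "cy": 2343.41,
        "w": 7229.0, "h": 4754.0}

def fov(size: float, fl: float) -> float:
    """Field of view (radians) of a pinhole camera along one axis."""
    return 2.0 * math.atan(size / (2.0 * fl))

def scale_intrinsics(intr: dict, factor: float) -> dict:
    """Rescale pinhole intrinsics for a uniformly resized image."""
    keys = ("fl_x", "fl_y", "cx", "cy", "w", "h")
    return {k: (v * factor if k in keys else v) for k, v in intr.items()}

print(round(fov(intr["w"], intr["fl_x"]), 3))  # 1.295 -> matches camera_angle_x
print(round(fov(intr["h"], intr["fl_y"]), 3))  # 0.899 -> matches camera_angle_y

quarter = scale_intrinsics(intr, 0.25)
# The field of view is invariant under uniform rescaling:
assert math.isclose(fov(quarter["w"], quarter["fl_x"]),
                    fov(intr["w"], intr["fl_x"]))
```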
Camera intrinsics (`fl_x`, `fl_y`, `cx`, `cy`, `w`, `h`) in the `transforms.json` files are adjusted accordingly. All 9 capture conditions and their scenes are included.

---

## 📷 `data_arw/` — Sony RAW Data

Sony ARW (TIFF-wrapped RAW) sensor data preserving full EXIF metadata.

### Differences from `data/`

- **Image format**: `.ARW` (~33–35 MB per frame)
- **7 conditions available**: `defocus_mild`, `defocus_strong`, `dynamic`, `lowlight`, `reflection`, `smoke`, `varyexp` (motion blur conditions are **excluded**)

### Per-Scene Directory Layout

```
data_arw/{condition}/{scene}/
├── train/       # ARW raw images
├── val/
├── test/
└── sparse/0/    # COLMAP sparse reconstruction
```

---

## 📍 `pointclouds/` — 3D Ground Truth

High-precision 3D geometry ground truth, organized directly by **scene name** (geometry is shared across capture conditions for the same scene).

### Per-Scene Directory Layout

```
pointclouds/{scene}/
├── cull_pointcloud.ply    # Culled point cloud (view-frustum trimmed)
├── cull_mesh.ply          # Culled triangle mesh
├── colmap2world.npy       # 4×4 COLMAP-to-world transform (NumPy format)
└── depth/                 # 16-bit depth maps rendered from the mesh
    ├── 0001.png
    ├── 0002.png
    └── ...
```

The `colmap2world.npy` matrix aligns COLMAP reconstructions to the world coordinate system of the ground-truth geometry. The same transform is also stored as `colmap2world.txt` in the corresponding `data/` directories.

---

## 📜 Citation

```bibtex
@article{liu2025realx3d,
  title   = {RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction},
  author  = {Liu, Shuhong and Bao, Chenyu and Cui, Ziteng and Liu, Yun and Chu, Xuangeng and Gu, Lin and Conde, Marcos V. and Umagami, Ryo and Hashimoto, Tomohiro and Hu, Zijian and others},
  journal = {arXiv preprint arXiv:2512.23437},
  year    = {2025}
}
```

---

## 📄 License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).