---
license: mit
task_categories:
  - image-to-3d
  - depth-estimation
  - image-to-image
tags:
  - 3d-reconstruction
  - multi-view
  - nerf
  - 3d-gaussian-splatting
  - novel-view-synthesis
  - benchmark
  - colmap
  - point-cloud
  - depth-map
  - raw-image
  - computational-photography
pretty_name: "RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction"
size_categories:
  - 1K<n<10K
---

<div align="center">

# RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction

[![Project Page](https://img.shields.io/badge/🌐_Project_Page-RealX3D-blue?style=for-the-badge)](https://i2wm.github.io/3DRR_2026/)
[![GitHub](https://img.shields.io/badge/GitHub-Code-black?style=for-the-badge&logo=github)](https://github.com/ShuhongLL/RealX3D)
[![arXiv](https://img.shields.io/badge/arXiv-2512.23437-b31b1b?style=for-the-badge)](https://arxiv.org/abs/2512.23437)
[![Challenge](https://img.shields.io/badge/🏆_3DRR_Challenge-NTIRE_@_CVPR_2026-purple?style=for-the-badge)](https://www.codabench.org/competitions/13854/)
[![License](https://img.shields.io/badge/License-MIT-green?style=for-the-badge)](https://opensource.org/licenses/MIT)

</div>

**RealX3D** is a real-world benchmark dataset for multi-view 3D reconstruction under challenging capture conditions. It provides multi-view RGB images (both processed JPEG and Sony RAW), COLMAP sparse reconstructions, and high-precision 3D ground-truth geometry (point clouds, meshes, and rendered depth maps) across a diverse set of scenes and degradation types.

<div align="center">
<table>
<tr>
<td align="center"><b>🌙 Low Light</b></td>
<td align="center"><b>💨 Smoke</b></td>
</tr>
<tr>
<td align="center">
<video src="https://raw.githubusercontent.com/I2WM/i2wm.github.io/main/3DRR_2026/static/videos/lowlight_teaser_compressed.mp4" width="400" controls autoplay muted loop></video>
</td>
<td align="center">
<video src="https://raw.githubusercontent.com/I2WM/i2wm.github.io/main/3DRR_2026/static/videos/smoke_teaser_compressed.mp4" width="400" controls autoplay muted loop></video>
</td>
</tr>
</table>
</div>

## ✨ Key Features

- **9 real-world degradation conditions**: defocus (mild/strong), motion blur (mild/strong), low light, smoke, reflection, dynamic objects, and varying exposure.
- **Full-resolution (\~7000×4700) and quarter-resolution (\~1800×1200)** JPEG images with COLMAP reconstructions.
- **Sony RAW (ARW)** sensor data with complete EXIF metadata for 7 conditions.
- **Per-frame metric depth maps** rendered from laser-scanned meshes.
- **Camera poses and intrinsics** in both COLMAP binary format and NeRF-compatible `transforms.json`.

## πŸ“ Dataset Structure

```
RealX3D/
├── data/              # Full-resolution JPEG images + COLMAP reconstructions
├── data_4/            # Quarter-resolution JPEG images + COLMAP reconstructions
├── baseline_results/  # Rendering results of baseline methods on data_4, for direct download
├── data_arw/          # Sony RAW (ARW) sensor data
├── pointclouds/       # 3D point clouds, meshes, and metric depth maps
└── scripts/           # Utility scripts
```

## 🚀 Release Status

> - [x] `data/` — Full-resolution JPEG images + COLMAP
> - [x] `data_4/` — Quarter-resolution JPEG images + COLMAP
> - [x] `baseline_results/` — Baseline rendering results
> - [ ] `data_arw/` — Sony RAW (ARW) sensor data
> - [ ] `pointclouds/` — 3D ground-truth geometry (point clouds, meshes, depth maps)


## 🌧️ Capture Conditions

| Condition | Description |
|-----------|-------------|
| `defocus_mild` | Mild defocus blur |
| `defocus_strong` | Strong defocus blur |
| `motion_mild` | Mild motion blur |
| `motion_strong` | Strong motion blur |
| `dynamic` | Dynamic objects in the scene |
| `reflection` | Specular reflections |
| `lowlight` | Low-light environment |
| `smoke` | Smoke / particulate occlusion |
| `varyexp` | Varying exposure |

## πŸ›οΈ Scenes

Akikaze, BlueHawaii, Chocolate, Cupcake, GearWorks, Hinoki, Koharu, Laboratory, Limon, MilkCookie, Natsume, Popcorn, Sculpture, Shirohana, Ujikintoki

---

## 📸 `data/` — Full-Resolution JPEG Images

Full-resolution JPEG images and corresponding COLMAP sparse reconstructions, organized by **condition β†’ scene**.

### Per-Scene Directory Layout

```
data/{condition}/{scene}/
├── train/                    # Training images (~23–31 frames)
│   ├── 0001.JPG
│   └── ...
├── val/                      # Validation images (~23–31 frames)
│   └── ...
├── test/                     # Test images (~4–6 frames)
│   └── ...
├── transforms_train.json     # Camera parameters & poses (training split)
├── transforms_val.json       # Camera parameters & poses (validation split)
├── transforms_test.json      # Camera parameters & poses (test split)
├── point3d.ply               # COLMAP sparse 3D point cloud
├── colmap2world.txt          # 4×4 COLMAP-to-world coordinate transform
├── sparse/0/                 # COLMAP sparse reconstruction
│   ├── cameras.bin / cameras.txt
│   ├── images.bin / images.txt
│   └── points3D.bin / points3D.txt
├── distorted/sparse/0/       # Pre-undistortion COLMAP reconstruction
└── stereo/                   # MVS configuration files
```

### πŸ“ `transforms.json` Format

Each `transforms_*.json` file contains shared camera intrinsics and per-frame extrinsics following the [Blender dataset convention](https://docs.nerf.studio/quickstart/data_conventions.html), for example:

```json
{
  "camera_angle_x": 1.295,
  "camera_angle_y": 0.899,
  "fl_x": 4778.31,
  "fl_y": 4928.04,
  "cx": 3649.23,
  "cy": 2343.41,
  "w": 7229.0,
  "h": 4754.0,
  "k1": 0, "k2": 0, "k3": 0, "k4": 0,
  "p1": 0, "p2": 0,
  "is_fisheye": false,
  "aabb_scale": 2,
  "frames": [
    {
      "file_path": "train/0001.JPG",
      "sharpness": 25.72,
      "transform_matrix": [[...], [...], [...], [...]]
    }
  ]
}
```

All distortion coefficients are zero (images are pre-undistorted).
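Given that layout, the shared intrinsics and per-frame poses can be pulled into NumPy with a few lines. This is a minimal sketch rather than official dataset tooling; `parse_transforms` is a hypothetical helper name, and the file path in the comment is only illustrative.

```python
import json
import numpy as np

def parse_transforms(meta):
    """Build a 3x3 pinhole intrinsics matrix K and a dict of per-frame
    4x4 camera-to-world poses from a loaded transforms_*.json dict."""
    K = np.array([
        [meta["fl_x"], 0.0,          meta["cx"]],
        [0.0,          meta["fl_y"], meta["cy"]],
        [0.0,          0.0,          1.0],
    ])
    poses = {f["file_path"]: np.asarray(f["transform_matrix"], dtype=np.float64)
             for f in meta["frames"]}
    return K, poses

# Typical usage (illustrative path):
# with open("data/lowlight/Akikaze/transforms_train.json") as fh:
#     K, poses = parse_transforms(json.load(fh))
```

Note that in the Blender/nerfstudio convention the stored matrices are camera-to-world with OpenGL-style camera axes (the camera looks along −Z, with +Y up), so a conversion may be needed for OpenCV/COLMAP-style pipelines.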

### πŸ–ΌοΈ Image Specifications

- **Format**: JPEG
- **Resolution**: ~7000 × 4700 pixels (varies slightly across scenes)
- **Camera**: Sony ILCE-7M4 (α7 IV)
- **Camera Model**: PINHOLE (pre-undistorted)

---

## 📸 `data_4/` — Quarter-Resolution JPEG Images (Used for 2026 NTIRE-3DRR Challenge)

Identical directory structure to `data/`, with images downsampled to **1/4 resolution** (~1800 × 1200 pixels). Camera intrinsics (`fl_x`, `fl_y`, `cx`, `cy`, `w`, `h`) in the `transforms.json` files are adjusted accordingly. All 9 capture conditions and their scenes are included.
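The relationship between the two resolutions is a plain linear rescaling of the pinhole parameters. A minimal sketch, assuming only the fields shown in the `transforms.json` example above (`scale_intrinsics` is an illustrative helper, not part of the dataset scripts):

```python
def scale_intrinsics(meta, factor=0.25):
    """Return a copy of the shared intrinsics rescaled for resized images.
    Focal lengths, principal point, and image size all scale linearly;
    field-of-view angles (camera_angle_x/y) are resolution-independent
    and distortion coefficients are zero, so neither needs adjustment."""
    scaled = dict(meta)
    for key in ("fl_x", "fl_y", "cx", "cy", "w", "h"):
        scaled[key] = meta[key] * factor
    return scaled
```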

---

## 📷 `data_arw/` — Sony RAW Data

Sony ARW (TIFF-wrapped RAW) sensor data preserving full EXIF metadata.

### Differences from `data/`

- **Image format**: `.ARW` (~33–35 MB per frame)
- **7 conditions available**: `defocus_mild`, `defocus_strong`, `dynamic`, `lowlight`, `reflection`, `smoke`, `varyexp` (motion blur conditions are **excluded**)

### Per-Scene Directory Layout

```
data_arw/{condition}/{scene}/
├── train/              # ARW raw images
├── val/
├── test/
└── sparse/0/           # COLMAP sparse reconstruction
```

---

## πŸ“ `pointclouds/` β€” 3D Ground Truth

High-precision 3D geometry ground truth, organized directly by **scene name** (geometry is shared across capture conditions for the same scene).

### Per-Scene Directory Layout

```
pointclouds/{scene}/
├── cull_pointcloud.ply   # Culled point cloud (view-frustum trimmed)
├── cull_mesh.ply         # Culled triangle mesh
├── colmap2world.npy      # 4×4 COLMAP-to-world transform (NumPy format)
└── depth/                # 16-bit depth maps rendered from the mesh
    ├── 0001.png
    ├── 0002.png
    └── ...
```
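Reading the depth maps amounts to decoding a 16-bit PNG and rescaling to metric units. A minimal sketch; note that `DEPTH_SCALE` is an assumption (the card does not document the unit encoding) and the treatment of 0 as "no depth" is likewise a common convention, not something stated here:

```python
import numpy as np
from PIL import Image

# ASSUMPTION: scale from stored uint16 values to meters (e.g. if depths
# are stored in millimeters). Verify against the dataset before use.
DEPTH_SCALE = 1.0 / 1000.0

def load_depth(path, scale=DEPTH_SCALE):
    """Read a 16-bit depth PNG into a float32 depth map.
    Pixels with value 0 are treated as invalid (an assumption) and
    mapped to NaN so they are easy to mask out downstream."""
    raw = np.asarray(Image.open(path), dtype=np.uint16)
    depth = raw.astype(np.float32) * scale
    depth[raw == 0] = np.nan
    return depth
```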

The `colmap2world.npy` matrix aligns COLMAP reconstructions to the world coordinate system of the ground-truth geometry. The same transform is also stored as `colmap2world.txt` in the corresponding `data/` directories.
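Both files store the same 4×4 homogeneous matrix, so applying it is a one-liner in NumPy. A minimal sketch (`colmap_to_world` is an illustrative helper name, and the paths in the comments are schematic):

```python
import numpy as np

def colmap_to_world(points_xyz, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of
    COLMAP-frame points, returning (N, 3) world-frame points."""
    pts = np.asarray(points_xyz, dtype=np.float64)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (pts_h @ T.T)[:, :3]

# The transform ships in two equivalent forms:
#   T = np.loadtxt("data/{condition}/{scene}/colmap2world.txt")
#   T = np.load("pointclouds/{scene}/colmap2world.npy")
```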

---

## 📜 Citation

```bibtex
@article{liu2025realx3d,
  title   = {RealX3D: A Physically-Degraded 3D Benchmark for Multi-view
             Visual Restoration and Reconstruction},
  author  = {Liu, Shuhong and Bao, Chenyu and Cui, Ziteng and Liu, Yun
             and Chu, Xuangeng and Gu, Lin and Conde, Marcos V and
             Umagami, Ryo and Hashimoto, Tomohiro and Hu, Zijian and others},
  journal = {arXiv preprint arXiv:2512.23437},
  year    = {2025}
}
```

---

## 📄 License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).