---
license: mit
task_categories:
  - image-to-3d
  - depth-estimation
  - image-to-image
tags:
  - 3d-reconstruction
  - multi-view
  - nerf
  - 3d-gaussian-splatting
  - novel-view-synthesis
  - benchmark
  - colmap
  - point-cloud
  - depth-map
  - raw-image
  - computational-photography
pretty_name: >-
  RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration
  and Reconstruction
size_categories:
  - 1K<n<10K
---

# RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction


RealX3D is a real-world benchmark dataset for multi-view 3D reconstruction under challenging capture conditions. It provides multi-view RGB images (both processed JPEG and Sony RAW), COLMAP sparse reconstructions, and high-precision 3D ground-truth geometry (point clouds, meshes, and rendered depth maps) across a diverse set of scenes and degradation types.

*Example degradation captures: 🌙 Low Light · 💨 Smoke*

## ✨ Key Features

- 9 real-world degradation conditions: defocus (mild/strong), motion blur (mild/strong), low light, smoke, reflection, dynamic objects, and varying exposure.
- Full-resolution (~7000×4700) and quarter-resolution (~1800×1200) JPEG images with COLMAP reconstructions.
- Sony RAW (ARW) sensor data with complete EXIF metadata for 7 of the 9 conditions.
- Per-frame metric depth maps rendered from laser-scanned meshes.
- Camera poses and intrinsics in both COLMAP binary format and NeRF-compatible `transforms.json`.

πŸ“ Dataset Structure

RealX3D/
β”œβ”€β”€ data/              # Full-resolution JPEG images + COLMAP reconstructions
β”œβ”€β”€ data_4/            # Quarter-resolution JPEG images + COLMAP reconstructions
β”œβ”€β”€ baseline_results/  # Baseline methods rendering results on data_4 for direct download
β”œβ”€β”€ data_arw/          # Sony RAW (ARW) sensor data
β”œβ”€β”€ pointclouds/       # 3D point clouds, meshes, and metric depth maps
└── scripts/           # Utilities scripts

## 🚀 Release Status

- `data/` — Full-resolution JPEG images + COLMAP
- `data_4/` — Quarter-resolution JPEG images + COLMAP
- `baseline_results/` — Baseline rendering results
- `data_arw/` — Sony RAW (ARW) sensor data
- `pointclouds/` — 3D ground-truth geometry (point clouds, meshes, depth maps)

## 🌧️ Capture Conditions

| Condition | Description |
| --- | --- |
| `defocus_mild` | Mild defocus blur |
| `defocus_strong` | Strong defocus blur |
| `motion_mild` | Mild motion blur |
| `motion_strong` | Strong motion blur |
| `dynamic` | Dynamic objects in the scene |
| `reflection` | Specular reflections |
| `lowlight` | Low-light environment |
| `smoke` | Smoke / particulate occlusion |
| `varyexp` | Varying exposure |

πŸ›οΈ Scenes

Akikaze, BlueHawaii, Chocolate, Cupcake, GearWorks, Hinoki, Koharu, Laboratory, Limon, MilkCookie, Natsume, Popcorn, Sculpture, Shirohana, Ujikintoki


## 📸 data/ — Full-Resolution JPEG Images

Full-resolution JPEG images and corresponding COLMAP sparse reconstructions, organized by condition → scene.

### Per-Scene Directory Layout

```
data/{condition}/{scene}/
├── train/                    # Training images (~23–31 frames)
│   ├── 0001.JPG
│   └── ...
├── val/                      # Validation images (~23–31 frames)
│   └── ...
├── test/                     # Test images (~4–6 frames)
│   └── ...
├── transforms_train.json     # Camera parameters & poses (training split)
├── transforms_val.json       # Camera parameters & poses (validation split)
├── transforms_test.json      # Camera parameters & poses (test split)
├── point3d.ply               # COLMAP sparse 3D point cloud
├── colmap2world.txt          # 4×4 COLMAP-to-world coordinate transform
├── sparse/0/                 # COLMAP sparse reconstruction
│   ├── cameras.bin / cameras.txt
│   ├── images.bin / images.txt
│   └── points3D.bin / points3D.txt
├── distorted/sparse/0/       # Pre-undistortion COLMAP reconstruction
└── stereo/                   # MVS configuration files
```
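The `sparse/0/` text exports follow COLMAP's documented column layout (`CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]`, where `PARAMS` for the `PINHOLE` model is `fx fy cx cy`). As a minimal sketch, one `cameras.txt` entry can be parsed like this; the sample line is illustrative, not taken from the dataset:

```python
# Minimal parser for one non-comment line of a COLMAP cameras.txt file.
# PINHOLE params per the COLMAP format docs: fx fy cx cy.

def parse_pinhole_camera(line: str) -> dict:
    """Parse a COLMAP cameras.txt line for the PINHOLE camera model."""
    fields = line.split()
    if fields[1] != "PINHOLE":
        raise ValueError(f"expected PINHOLE model, got {fields[1]}")
    fx, fy, cx, cy = map(float, fields[4:8])
    return {
        "camera_id": int(fields[0]),
        "width": int(fields[2]),
        "height": int(fields[3]),
        "fx": fx, "fy": fy, "cx": cx, "cy": cy,
    }

# Hypothetical sample line (values vary per scene).
sample = "1 PINHOLE 7229 4754 4778.31 4928.04 3649.23 2343.41"
cam = parse_pinhole_camera(sample)
```

For the binary variants (`cameras.bin`, `images.bin`, `points3D.bin`), a reader such as pycolmap or COLMAP's own `read_write_model.py` script is the usual route.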

πŸ“ transforms.json Format

Each transforms_*.json file contains shared camera intrinsics and per-frame extrinsics following Blender Dataset format, for example:

```json
{
  "camera_angle_x": 1.295,
  "camera_angle_y": 0.899,
  "fl_x": 4778.31,
  "fl_y": 4928.04,
  "cx": 3649.23,
  "cy": 2343.41,
  "w": 7229.0,
  "h": 4754.0,
  "k1": 0, "k2": 0, "k3": 0, "k4": 0,
  "p1": 0, "p2": 0,
  "is_fisheye": false,
  "aabb_scale": 2,
  "frames": [
    {
      "file_path": "train/0001.JPG",
      "sharpness": 25.72,
      "transform_matrix": [[...], [...], [...], [...]]
    }
  ]
}
```

All distortion coefficients are zero (images are pre-undistorted).
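The pixel-unit fields are redundant with the angle fields through the pinhole relation `camera_angle_x = 2 * atan(w / (2 * fl_x))`. A minimal Python sketch of how the intrinsics above fit together:

```python
import math

# Sample intrinsics from the transforms.json excerpt above.
fl_x, fl_y = 4778.31, 4928.04
cx, cy = 3649.23, 2343.41
w = 7229.0

# Pinhole relation: horizontal FoV (radians) from image width
# and focal length, both in pixels.
camera_angle_x = 2.0 * math.atan(w / (2.0 * fl_x))
# Consistent with the "camera_angle_x" field (~1.295 rad).

# 3x3 intrinsics matrix assembled from the same fields.
K = [[fl_x, 0.0, cx],
     [0.0, fl_y, cy],
     [0.0, 0.0, 1.0]]
```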

### 🖼️ Image Specifications

- Format: JPEG
- Resolution: ~7000 × 4700 pixels (varies slightly across scenes)
- Camera: Sony ILCE-7M4 (α7 IV)
- Camera model: `PINHOLE` (pre-undistorted)

## 📸 data_4/ — Quarter-Resolution JPEG Images (Used for 2026 NTIRE-3DRR Challenge)

Identical directory structure to `data/`, with images downsampled to 1/4 resolution (~1800 × 1200 pixels). Camera intrinsics (`fl_x`, `fl_y`, `cx`, `cy`, `w`, `h`) in the `transforms.json` files are adjusted accordingly. All 9 capture conditions and their scenes are included.
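Downsampling by a factor *s* scales every pixel-unit intrinsic by the same *s*, while the field-of-view angles stay fixed. A small sketch of that adjustment (the helper name is ours, not from the dataset's scripts):

```python
def scale_intrinsics(meta: dict, s: float) -> dict:
    """Scale pixel-unit intrinsics by factor s (e.g. s=0.25 for data_4/).
    FoV angles and distortion flags are resolution-independent."""
    out = dict(meta)
    for key in ("fl_x", "fl_y", "cx", "cy", "w", "h"):
        out[key] = meta[key] * s
    return out

# Full-resolution values from the transforms.json example.
full = {"fl_x": 4778.31, "fl_y": 4928.04, "cx": 3649.23,
        "cy": 2343.41, "w": 7229.0, "h": 4754.0}
quarter = scale_intrinsics(full, 0.25)  # ~1807 x 1188 pixels
```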


## 📷 data_arw/ — Sony RAW Data

Sony ARW (TIFF-wrapped RAW) sensor data preserving full EXIF metadata.

### Differences from data/

- Image format: `.ARW` (~33–35 MB per frame)
- 7 conditions available: `defocus_mild`, `defocus_strong`, `dynamic`, `lowlight`, `reflection`, `smoke`, `varyexp` (motion-blur conditions are excluded)
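Since the motion-blur captures are the only ones without RAW data, the ARW coverage follows directly from the full condition list; a small sketch for scripting over both subsets:

```python
# Condition names as used in the directory layout.
ALL_CONDITIONS = {
    "defocus_mild", "defocus_strong", "motion_mild", "motion_strong",
    "dynamic", "reflection", "lowlight", "smoke", "varyexp",
}
MOTION_CONDITIONS = {"motion_mild", "motion_strong"}

# data_arw/ ships every condition except the motion-blur ones.
ARW_CONDITIONS = ALL_CONDITIONS - MOTION_CONDITIONS
```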

### Per-Scene Directory Layout

```
data_arw/{condition}/{scene}/
├── train/              # ARW raw images
├── val/
├── test/
└── sparse/0/           # COLMAP sparse reconstruction
```

πŸ“ pointclouds/ β€” 3D Ground Truth

High-precision 3D geometry ground truth, organized directly by scene name (geometry is shared across capture conditions for the same scene).

Per-Scene Directory Layout

pointclouds/{scene}/
β”œβ”€β”€ cull_pointcloud.ply   # Culled point cloud (view-frustum trimmed)
β”œβ”€β”€ cull_mesh.ply         # Culled triangle mesh
β”œβ”€β”€ colmap2world.npy      # 4Γ—4 COLMAP-to-world transform (NumPy format)
└── depth/                # 16-bit Depth maps rendered from the mesh
    β”œβ”€β”€ 0001.png
    β”œβ”€β”€ 0002.png
    └── ...

The `colmap2world.npy` matrix aligns COLMAP reconstructions to the world coordinate system of the ground-truth geometry. The same transform is also stored as `colmap2world.txt` in the corresponding `data/` directories.
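The transform is a standard 4×4 homogeneous matrix, so mapping a COLMAP-frame point into the ground-truth world frame is a single matrix–vector product. In practice the `.npy` file would be loaded with `numpy.load`; the pure-Python sketch below uses a made-up similarity transform for illustration:

```python
def apply_colmap2world(T, p):
    """Apply a 4x4 homogeneous transform T (nested lists) to a 3D point p."""
    x, y, z = p
    h = [x, y, z, 1.0]
    return [sum(T[r][c] * h[c] for c in range(4)) for r in range(3)]

# Hypothetical similarity transform: uniform scale 2, translation (1, 0, -3).
T = [[2.0, 0.0, 0.0, 1.0],
     [0.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 2.0, -3.0],
     [0.0, 0.0, 0.0, 1.0]]

world_pt = apply_colmap2world(T, (1.0, 2.0, 3.0))  # -> [3.0, 4.0, 3.0]
```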


## 📜 Citation

```bibtex
@article{liu2025realx3d,
  title   = {RealX3D: A Physically-Degraded 3D Benchmark for Multi-view
             Visual Restoration and Reconstruction},
  author  = {Liu, Shuhong and Bao, Chenyu and Cui, Ziteng and Liu, Yun
             and Chu, Xuangeng and Gu, Lin and Conde, Marcos V and
             Umagami, Ryo and Hashimoto, Tomohiro and Hu, Zijian and others},
  journal = {arXiv preprint arXiv:2512.23437},
  year    = {2025}
}
```

## 📄 License

This dataset is released under the MIT License.