---
license: cc-by-4.0
task_categories:
- image-to-3d
tags:
- 3d-gaussian-splatting
- novel-view-synthesis
- deblurring
- sparse-views
- 3d-reconstruction
---
# CoherentGS-DL3DV-Blur Dataset
CoherentGS tackles one of the hardest regimes for 3D Gaussian Splatting (3DGS): sparse inputs with severe motion blur. We break the "vicious cycle" between missing viewpoints and degraded photometry by coupling a physics-aware deblurring prior with diffusion-driven geometry completion, enabling coherent, high-frequency reconstructions from as few as 3–9 views on both synthetic and real scenes.
- **Paper:** Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views
- **Project Page:** https://potatobigroom.github.io/CoherentGS/
- **Code:** https://github.com/PotatoBigRoom/CoherentGS
## Motivation
To rigorously assess the generalization capability of CoherentGS in complex, unconstrained outdoor environments, we establish a new benchmark named DL3DV-Blur. This benchmark is derived from five diverse scenes within the DL3DV-10K dataset.
**Citation reference:** Ling et al. (2024). DL3DV-10K: A Large-scale Dataset for Deep Learning-based 3D Vision. https://arxiv.org/abs/2312.16256
## Dataset Source
This dataset is constructed from select scenes of the official DL3DV-10K repository.
- DL3DV-10K GitHub: https://github.com/DL3DV-10K/Dataset
## Data Format
The dataset structure adheres to standard 3D vision dataset formats, where each scene (e.g., 0001) contains sub-folders for different view configurations (e.g., 3views, 6views, 9views).
### Structure Overview
The hierarchical structure of the data is as follows:
```
dl3dv/
├── 0641-0720/
│   ├── 0001/                            # Scene ID 0001
│   │   ├── .work/
│   │   ├── 3views/                      # 3-view subset
│   │   │   ├── images/                  # Raw input image files
│   │   │   ├── ref_image/               # Reference images
│   │   │   ├── sparse/                  # Sparse reconstruction results (e.g., COLMAP output)
│   │   │   ├── cameras.json             # Camera parameter file
│   │   │   ├── ext_metadata.json        # Additional metadata
│   │   │   ├── hold=7                   # Test set configuration
│   │   │   ├── intrinsics.json          # Camera intrinsics
│   │   │   ├── poses_bounds.npy         # Camera poses and scene bounds
│   │   │   ├── train_test_split_3.json  # Train/test split definition
│   │   │   └── transforms.json          # Coordinate transformation info
│   │   ├── 6views/                      # 6-view subset
│   │   └── 9views/                      # 9-view subset
│   ├── 0002/
│   ├── 0003/
│   ├── 0004/
│   └── 0005/
└── ...
```
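As a rough orientation for the `poses_bounds.npy` files, here is a minimal sketch of how such an array is conventionally unpacked. It assumes the standard LLFF-style `(N, 17)` layout (a flattened 3×5 pose matrix per row, i.e., a 3×4 camera-to-world pose plus a `[height, width, focal]` column, followed by near/far bounds); the `parse_poses_bounds` helper and the dummy array are illustrative, not part of the release:

```python
import numpy as np

def parse_poses_bounds(arr: np.ndarray):
    """Split an LLFF-style (N, 17) poses_bounds array into its components."""
    assert arr.ndim == 2 and arr.shape[1] == 17, "expected an (N, 17) array"
    mats = arr[:, :15].reshape(-1, 3, 5)
    poses = mats[:, :, :4]   # (N, 3, 4) camera-to-world extrinsics
    hwf = mats[:, :, 4]      # (N, 3) image height, width, focal length
    bounds = arr[:, 15:17]   # (N, 2) near/far scene bounds
    return poses, hwf, bounds

# Demo on a synthetic 3-view array; in practice use
# np.load("dl3dv/0641-0720/0001/3views/poses_bounds.npy").
dummy = np.zeros((3, 17))
dummy[:, :15] = np.tile(np.eye(3, 5).reshape(-1), (3, 1))
dummy[:, 15:] = [0.1, 10.0]
poses, hwf, bounds = parse_poses_bounds(dummy)
print(poses.shape, hwf.shape, bounds.shape)  # (3, 3, 4) (3, 3) (3, 2)
```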
## Sample Usage
### Installation
Tested with Python 3.10 and PyTorch 2.1.2 (CUDA 11.8). Adjust CUDA wheels as needed for your platform.
```bash
# (Optional) fresh conda env
conda create --name CoherentGS -y "python<3.11"
conda activate CoherentGS

# Install dependencies
pip install --upgrade pip setuptools
pip install "torch==2.1.2+cu118" "torchvision==0.16.2+cu118" --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```
### Data

- Download DL3DV-Blur and related assets from this Hugging Face dataset.
- Place the downloaded data under `datasets/` (or adjust the paths in the provided scripts).
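After downloading, it can be useful to sanity-check that each scene's view subset contains the expected files before training. This is an illustrative sketch, not part of the release: the `check_scene` helper, the expected-entry list (taken from the structure above), and the example path are all assumptions.

```python
from pathlib import Path

# Entries we expect inside each <scene>/<k>views/ folder, per the structure above.
EXPECTED = ["images", "ref_image", "sparse", "cameras.json",
            "intrinsics.json", "poses_bounds.npy", "transforms.json"]

def check_scene(scene_dir, split="3views"):
    """Return the expected entries that are missing from one view subset."""
    root = Path(scene_dir) / split
    return [name for name in EXPECTED if not (root / name).exists()]

# Example: report anything missing from scene 0001's 3-view subset.
print(check_scene("datasets/dl3dv/0641-0720/0001"))
```

An empty list means the subset looks complete; otherwise the returned names point at what failed to download.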
### Training

Train on DL3DV-Blur (full resolution) with:

```bash
bash run_dl3dv.sh
```

For custom settings, start from `run.sh` and tweak dataset paths, resolution, and batch sizes.
## Citation

If CoherentGS supports your research, please cite:

```bibtex
@article{feng2025coherentgs,
  author = {Feng, Chaoran and Xu, Zhankuo and Li, Yingtao and Zhao, Jianbin and Yang, Jiashu and Yu, Wangbo and Yuan, Li and Tian, Yonghong},
  title  = {Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views},
  year   = {2025},
}
```