---
license: cc-by-4.0
task_categories:
- image-to-3d
tags:
- 3d-gaussian-splatting
- novel-view-synthesis
- deblurring
- sparse-views
- 3d-reconstruction
---
# CoherentGS-DL3DV-Blur Dataset
CoherentGS tackles one of the hardest regimes for 3D Gaussian Splatting (3DGS): sparse inputs with severe motion blur. We break the "vicious cycle" between missing viewpoints and degraded photometry by coupling a physics-aware deblurring prior with diffusion-driven geometry completion, enabling coherent, high-frequency reconstructions from as few as 3–9 views on both synthetic and real scenes.
**Paper:** [Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views](https://huggingface.co/papers/2512.10369)
**Project Page:** https://potatobigroom.github.io/CoherentGS/
**Code:** https://github.com/PotatoBigRoom/CoherentGS
## Motivation 💡
To rigorously assess the generalization capability of **CoherentGS** in complex, unconstrained outdoor environments, we establish a new benchmark named **DL3DV-Blur**. This benchmark is derived from five diverse scenes within the DL3DV-10K dataset.
> **Citation Reference:** Ling et al. (2024). DL3DV-10K: A Large-scale Dataset for Deep Learning-based 3D Vision.
> [https://arxiv.org/abs/2312.16256](https://arxiv.org/abs/2312.16256)
## Dataset Source 🔗
This dataset is constructed from five selected scenes of the official DL3DV-10K repository.
- **DL3DV-10K GitHub:** https://github.com/DL3DV-10K/Dataset
## Data Format 📂
The dataset structure adheres to standard 3D vision dataset formats, where each scene (e.g., `0001`) contains sub-folders for different view configurations (e.g., `3views`, `6views`, `9views`).
### Structure Overview
The hierarchical structure of the data is as follows:
```text
dl3dv/
├── 0641-0720/
│   ├── 0001/                         # Scene ID 0001
│   │   ├── .work/
│   │   ├── 3views/                   # 3-view subset
│   │   │   ├── images/               # Raw input image files
│   │   │   ├── ref_image/            # Reference image
│   │   │   ├── sparse/               # Sparse reconstruction results (e.g., COLMAP output)
│   │   │   ├── cameras.json          # Camera parameter file
│   │   │   ├── ext_metadata.json     # Additional metadata
│   │   │   ├── hold=7                # Test set configuration
│   │   │   ├── intrinsics.json       # Camera intrinsics
│   │   │   ├── poses_bounds.npy      # Camera poses and scene bounds
│   │   │   ├── train_test_split_3.json  # Train/test split definition
│   │   │   └── transforms.json       # Coordinate transformation info
│   │   ├── 6views/                   # 6-view subset
│   │   └── 9views/                   # 9-view subset
│   ├── 0002/
│   ├── 0003/
│   ├── 0004/
│   └── 0005/
└── ...
```
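Assuming `poses_bounds.npy` follows the standard LLFF convention (one row of 17 floats per view: a flattened 3×5 pose matrix, i.e. a 3×4 `[R|t]` extrinsic plus an `[H, W, focal]` column, followed by near/far depth bounds), it can be unpacked as sketched below. The parser is illustrative only, not part of the official tooling:

```python
import numpy as np

def parse_poses_bounds(arr: np.ndarray):
    """Split an LLFF-style (N, 17) array into per-view 3x5 pose
    matrices and (near, far) depth bounds."""
    assert arr.ndim == 2 and arr.shape[1] == 17, "expected shape (N, 17)"
    poses = arr[:, :15].reshape(-1, 3, 5)  # 3x4 [R|t] plus [H, W, focal] column
    bounds = arr[:, 15:]                   # (near, far) per view
    return poses, bounds

# Synthetic stand-in for one scene's poses_bounds.npy (2 views);
# in practice: arr = np.load("dl3dv/0641-0720/0001/3views/poses_bounds.npy")
demo = np.zeros((2, 17))
poses, bounds = parse_poses_bounds(demo)
print(poses.shape, bounds.shape)  # (2, 3, 5) (2, 2)
```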
## Sample Usage
### Installation
Tested with Python 3.10 and PyTorch 2.1.2 (CUDA 11.8). Adjust CUDA wheels as needed for your platform.
```bash
# (Optional) fresh conda env
conda create --name CoherentGS -y python=3.10
conda activate CoherentGS
# Install dependencies
pip install --upgrade pip setuptools
pip install "torch==2.1.2+cu118" "torchvision==0.16.2+cu118" --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```
### Data
Download DL3DV-Blur and related assets from this Hugging Face dataset.
Place downloaded data under `datasets/` (or adjust paths in the provided scripts).
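One way to fetch the data programmatically is with `huggingface_hub` (a sketch; the repo id below is a placeholder — substitute the actual id of this dataset card):

```python
from huggingface_hub import snapshot_download

def fetch_dl3dv_blur(repo_id: str, local_dir: str = "datasets/dl3dv") -> str:
    """Download the full dataset snapshot into local_dir and return its path."""
    return snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir=local_dir)

# Usage (replace the placeholder repo id):
# path = fetch_dl3dv_blur("ORG/CoherentGS-DL3DV-Blur")
```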
### Training
Train on DL3DV-Blur (full resolution) with:
```bash
bash run_dl3dv.sh
```
For custom settings, start from `run.sh` and tweak dataset paths, resolution, and batch sizes.
## Citation
If CoherentGS supports your research, please cite:
```bibtex
@article{feng2025coherentgs,
  author = {Feng, Chaoran and Xu, Zhankuo and Li, Yingtao and Zhao, Jianbin and Yang, Jiashu and Yu, Wangbo and Yuan, Li and Tian, Yonghong},
  title  = {Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views},
  year   = {2025},
}
```