---
license: cc-by-4.0
task_categories:
  - image-to-3d
tags:
  - 3d-gaussian-splatting
  - novel-view-synthesis
  - deblurring
  - sparse-views
  - 3d-reconstruction
---

# CoherentGS-DL3DV-Blur Dataset

CoherentGS tackles one of the hardest regimes for 3D Gaussian Splatting (3DGS): sparse inputs with severe motion blur. We break the "vicious cycle" between missing viewpoints and degraded photometry by coupling a physics-aware deblurring prior with diffusion-driven geometry completion, enabling coherent, high-frequency reconstructions from only 3–9 views on both synthetic and real scenes.

- **Paper:** Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views
- **Project Page:** https://potatobigroom.github.io/CoherentGS/
- **Code:** https://github.com/PotatoBigRoom/CoherentGS

*(Figure: CoherentGS overview.)*

## Motivation 💡

To rigorously assess the generalization capability of CoherentGS in complex, unconstrained outdoor environments, we establish a new benchmark named DL3DV-Blur. This benchmark is derived from five diverse scenes within the DL3DV-10K dataset.

Citation reference: Ling et al. (2024). *DL3DV-10K: A Large-scale Dataset for Deep Learning-based 3D Vision.* https://arxiv.org/abs/2312.16256

## Dataset Source 🔗

This dataset is constructed from select scenes of the official DL3DV-10K repository.

## Data Format 📂

The dataset structure adheres to standard 3D vision dataset formats, where each scene (e.g., 0001) contains sub-folders for different view configurations (e.g., 3views, 6views, 9views).

### Structure Overview

The hierarchical structure of the data is as follows:

```
dl3dv/
├── 0641-0720/
│   ├── 0001/                  # Scene ID 0001
│   │   ├── .work/
│   │   ├── 3views/            # 3-View Sub-set
│   │   │   ├── images/            # Raw input image files
│   │   │   ├── ref_image/         # Reference image
│   │   │   ├── sparse/            # Sparse reconstruction results (e.g., COLMAP output)
│   │   │   ├── cameras.json       # Camera parameter file
│   │   │   ├── ext_metadata.json  # Additional metadata
│   │   │   ├── hold=7             # Test set configuration
│   │   │   ├── intrinsics.json    # Camera intrinsics
│   │   │   ├── poses_bounds.npy   # Camera poses and scene bounds
│   │   │   ├── train_test_split_3.json # Train/test split definition
│   │   │   └── transforms.json    # Coordinate transformation info
│   │   ├── 6views/            # 6-View Sub-set
│   │   └── 9views/            # 9-View Sub-set
│   ├── 0002/
│   ├── 0003/
│   ├── 0004/
│   └── 0005/
└── ...
```
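
For a quick look at one view configuration, the sketch below loads the per-scene files. The LLFF-style `(N, 17)` layout assumed for `poses_bounds.npy` and the local path used here are illustrative assumptions, not guarantees about the released format.

```python
import json
from pathlib import Path

import numpy as np

# Hypothetical local path to one view configuration of scene 0001.
scene_dir = Path("datasets/dl3dv/0641-0720/0001/3views")

# Assumption: poses_bounds.npy follows the common LLFF convention, i.e. an
# (N, 17) array holding a flattened 3x5 [R | t | hwf] matrix plus near/far
# depth bounds for each training image.
poses_bounds = np.load(scene_dir / "poses_bounds.npy")
poses = poses_bounds[:, :15].reshape(-1, 3, 5)
bounds = poses_bounds[:, 15:]
print("poses:", poses.shape, "bounds:", bounds.shape)

# Camera parameters and the train/test split are stored as plain JSON.
with open(scene_dir / "cameras.json") as f:
    cameras = json.load(f)
with open(scene_dir / "train_test_split_3.json") as f:
    split = json.load(f)
print("cameras entries:", len(cameras), "| split:", split)
```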

## Sample Usage

### Installation

Tested with Python 3.10 and PyTorch 2.1.2 (CUDA 11.8). Adjust CUDA wheels as needed for your platform.

```bash
# (Optional) fresh conda env
conda create --name CoherentGS -y "python<3.11"
conda activate CoherentGS

# Install dependencies
pip install --upgrade pip setuptools
pip install "torch==2.1.2+cu118" "torchvision==0.16.2+cu118" --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```
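
As an optional sanity check, a short Python snippet confirms that the pinned Torch build and the CUDA runtime are visible:

```python
import torch

# Expect something like "2.1.2+cu118" / "11.8" / True with the wheels above.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```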

### Data

Download DL3DV-Blur and related assets from this Hugging Face dataset. Place downloaded data under datasets/ (or adjust paths in the provided scripts).
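
One possible way to fetch everything programmatically is via `huggingface_hub`; the `repo_id` below is a placeholder and should be replaced with this dataset's actual repository ID:

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id -- substitute the actual Hugging Face dataset ID.
snapshot_download(
    repo_id="<org>/CoherentGS-DL3DV-Blur",
    repo_type="dataset",
    local_dir="datasets",
)
```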

### Training

Train on DL3DV-Blur (full resolution) with:

```bash
bash run_dl3dv.sh
```

For custom settings, start from run.sh and tweak dataset paths, resolution, and batch sizes.

## Citation

If CoherentGS supports your research, please cite:

```bibtex
@article{feng2025coherentgs,
  author    = {Feng, Chaoran and Xu, Zhankuo and Li, Yingtao and Zhao, Jianbin and Yang, Jiashu and Yu, Wangbo and Yuan, Li and Tian, Yonghong},
  title     = {Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views},
  year      = {2025},
}
```