Improve dataset card: Add task categories, paper, code, project page, description, image, sample usage, and citation

#1 by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +55 -0
README.md CHANGED
@@ -1,9 +1,27 @@
  ---
  license: cc-by-4.0
+ task_categories:
+ - image-to-3d
+ tags:
+ - 3d-gaussian-splatting
+ - novel-view-synthesis
+ - deblurring
+ - sparse-views
+ - 3d-reconstruction
  ---

  # CoherentGS-DL3DV-Blur Dataset

+ CoherentGS tackles one of the hardest regimes for 3D Gaussian Splatting (3DGS): sparse inputs with severe motion blur. We break the "vicious cycle" between missing viewpoints and degraded photometry by coupling a physics-aware deblurring prior with diffusion-driven geometry completion, enabling coherent, high-frequency reconstructions from as few as 3–9 views on both synthetic and real scenes.
+
+ **Paper:** [Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views](https://huggingface.co/papers/2512.10369)
+ **Project Page:** https://potatobigroom.github.io/CoherentGS/
+ **Code:** https://github.com/PotatoBigRoom/CoherentGS
+
+ <p align="center">
+ <img src="https://github.com/PotatoBigRoom/CoherentGS/blob/main/docs/static/images/pipeline.jpg?raw=true" alt="CoherentGS overview" width="90%">
+ </p>
+
  ## Motivation 💡

  To rigorously assess the generalization capability of **CoherentGS** in complex, unconstrained outdoor environments, we establish a new benchmark named **DL3DV-Blur**. This benchmark is derived from five diverse scenes within the DL3DV-10K dataset.
@@ -48,3 +66,40 @@ dl3dv/
  │ ├── 0004/
  │ └── 0005/
  └── ...
+ ```
+
+ ## Sample Usage
+ ### Installation
+ Tested with Python 3.10 and PyTorch 2.1.2 (CUDA 11.8). Adjust CUDA wheels as needed for your platform.
+
+ ```bash
+ # (Optional) fresh conda env
+ conda create --name CoherentGS -y "python<3.11"
+ conda activate CoherentGS
+
+ # Install dependencies
+ pip install --upgrade pip setuptools
+ pip install "torch==2.1.2+cu118" "torchvision==0.16.2+cu118" --extra-index-url https://download.pytorch.org/whl/cu118
+ pip install -r requirements.txt
+ ```
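The wheel specs above use pip's local-version convention, where the `+cu118` suffix pins the CUDA build of the wheel. A small illustrative helper (hypothetical, not part of CoherentGS) that splits such a spec into its parts:

```python
def split_torch_spec(spec):
    """Split a pip spec like 'torch==2.1.2+cu118' into
    (package, version, cuda_tag); cuda_tag is None when no
    local build suffix is present."""
    name, _, rest = spec.partition("==")
    version, _, local = rest.partition("+")
    return name, version, (local or None)

print(split_torch_spec("torch==2.1.2+cu118"))  # ('torch', '2.1.2', 'cu118')
```

This makes it easy to confirm that the `torch` and `torchvision` pins target the same CUDA build before installing.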
+
+ ### Data
+ Download DL3DV-Blur and related assets from this Hugging Face dataset.
+ Place downloaded data under `datasets/` (or adjust paths in the provided scripts).
+
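Since the training scripts expect the directory layout shown above, a quick sanity check can catch misplaced downloads early. A minimal sketch — the helper name is ours, and the assumption that the five scenes are numbered `0001` through `0005` is inferred from the visible `0004/`/`0005/` entries in the tree:

```python
import os

def find_missing_scenes(root):
    """Walk the dataset root and report which of the five assumed
    scene folders ('0001'..'0005') were not found anywhere under it."""
    expected = {f"{i:04d}" for i in range(1, 6)}
    found = set()
    for _, dirnames, _ in os.walk(root):
        found |= expected.intersection(dirnames)
    return sorted(expected - found)
```

Run it against `datasets/dl3dv` after downloading; an empty list means all five scene folders are in place.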
+ ### Training
+ Train on DL3DV-Blur (full resolution) with:
+ ```bash
+ bash run_dl3dv.sh
+ ```
+ For custom settings, start from `run.sh` and tweak dataset paths, resolution, and batch sizes.
+
+ ## Citation
+ If CoherentGS supports your research, please cite:
+ ```bibtex
+ @article{feng2025coherentgs,
+   author  = {Feng, Chaoran and Xu, Zhankuo and Li, Yingtao and Zhao, Jianbin and Yang, Jiashu and Yu, Wangbo and Yuan, Li and Tian, Yonghong},
+   title   = {Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views},
+   journal = {arXiv preprint arXiv:2512.10369},
+   year    = {2025},
+ }
+ ```