---
pretty_name: GenDS
tags:
- diffusion
- image-restoration
- computer-vision
license: mit
language:
- en
task_categories:
- text-to-image
size_categories:
- 100K<n<1M
---
# [CVPR-2025] GenDeg: Diffusion-based Degradation Synthesis for Generalizable All-In-One Image Restoration
# Dataset Card for GenDS dataset
<!-- Provide a quick summary of the dataset. -->
The **GenDS dataset** is a large-scale dataset for improving the generalization of image restoration models. It combines existing image restoration datasets with
diffusion-generated degraded samples produced by **GenDeg**.
---
## Usage
The dataset is fairly large (~360 GB compressed); we recommend having at least 800 GB of free space for download and extraction. **git-lfs** is required to download the dataset.
### Download Instructions
```bash
# Install git lfs
git lfs install
# Clone the dataset repository
git clone https://huggingface.co/datasets/Sudarshan2002/GenDS.git
cd GenDS
# Pull the parts
git lfs pull
```
### Extract the Dataset:
```bash
# Combine and extract
cat GenDS_part_* > GenDS.tar.gz
tar -xzvf GenDS.tar.gz
```
After extraction, rename `GenDSFull` to `GenDS`.
## Dataset Structure
The dataset includes:
- `train_gends.json`: Metadata for the training data
- `val_gends.json`: Metadata for the validation data
Each JSON file contains a list of dictionaries with the following fields:
```json
{
"image_path": "/relpath/to/image",
"target_path": "/relpath/to/ground_truth",
"dataset": "Source dataset name",
"degradation": "Original degradation type",
"category": "real | synthetic",
"degradation_sub_type": "GenDeg-generated degradation type OR 'Original' (if from existing dataset)",
"split": "train | val",
"mu": "mu value used in GenDeg",
"sigma": "sigma value used in GenDeg",
"random_sampled": true | false,
  "sampled_dataset": "Source dataset for mu/sigma when random_sampled is false"
}
```
### Example Usage:
```python
import json
# Load train metadata
with open("/path/to/train_gends.json") as f:
train_data = json.load(f)
print(train_data[0])
```
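Since `image_path` and `target_path` are stored relative to the dataset root, a small helper can resolve them to absolute paths and filter entries by degradation type. This is a sketch based on the schema above; the `GENDS_ROOT` location and the sample field values are assumptions, not part of the dataset card.

```python
import os

# Hypothetical dataset root after extraction; adjust to your setup.
GENDS_ROOT = "/path/to/GenDS"

def resolve_entry(entry, root):
    """Return a copy of a metadata entry with absolute image/target paths."""
    return {
        **entry,
        "image_path": os.path.join(root, entry["image_path"].lstrip("/")),
        "target_path": os.path.join(root, entry["target_path"].lstrip("/")),
    }

def gendeg_samples(entries, degradation):
    """Keep only GenDeg-synthesized samples of a given degradation type
    (entries whose degradation_sub_type is not 'Original')."""
    return [
        e for e in entries
        if e["degradation"] == degradation
        and e["degradation_sub_type"] != "Original"
    ]
```

For example, `gendeg_samples(train_data, "haze")` would select the synthesized haze samples, and `resolve_entry(train_data[0], GENDS_ROOT)` yields paths you can pass directly to an image loader.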
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
If you use **GenDS** in your work, please cite:
```bibtex
@article{rajagopalan2024gendeg,
title={GenDeg: Diffusion-Based Degradation Synthesis for Generalizable All-in-One Image Restoration},
author={Rajagopalan, Sudarshan and Nair, Nithin Gopalakrishnan and Paranjape, Jay N and Patel, Vishal M},
journal={arXiv preprint arXiv:2411.17687},
year={2024}
}
```