---
license: mit
task_categories:
- visual-question-answering
tags:
- generative-vqa
- multimodal
- vqa-v2
- coco
- question-answering
pretty_name: Generative-VQA-V2 (Curated)
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: full
path: main_metadata.csv
---
# Generative-VQA-V2-Curated
A curated, balanced, and cleaned version of the VQA v2 dataset specifically optimized for **Generative Visual Question Answering**.
This dataset transforms the standard VQA task into a generative challenge by removing "yes/no" shortcuts and balancing answer distributions to prevent models from overfitting to dominant answer classes.
## Dataset Summary
The primary goal of this curated set is to provide a "clean" signal for training multimodal models by:
- **Eliminating Binary Biases**: Removed all "yes/no" and "unknown" style answers
- **Balancing Classes**: Capped samples at 600 per answer to ensure the model learns a diverse vocabulary
- **Filtering Ambiguity**: Removed generic questions (e.g., "What is this?") to focus on specific visual grounding
## Dataset Statistics
- **Total QA Pairs**: 135,268
- **Unique Answer Classes**: 1,251
- **Source Images**: COCO Train 2014
- **Minimum Frequency per Answer**: 20
- **Maximum Samples per Answer**: 600
- **Average Question Length**: ~6 words
- **Average Answer Length**: ~1.5 words
## Curation Logic
The dataset was generated using the following filtering pipeline (a code sketch follows the list):
1. **Consensus-Based**: Only the majority-vote answer from the 10 human annotators is used
2. **Exclusion List**:
- Boolean answers: `yes`, `no`
- Uncertainty markers: `unknown`, `none`, `n/a`, `cant tell`, `not sure`
3. **Ambiguity Filter**: Removed questions containing:
- "what is in the image"
- "what is this"
- "what is that"
- "what do you see"
4. **Conciseness**: Answers are restricted to ≤5 words and ≤30 characters
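A minimal sketch of this pipeline in pandas. The input file name (`raw_vqa.csv`), its column names, and the exact step ordering are illustrative assumptions, not the original build script:
```python
import pandas as pd

# Illustrative only: "raw_vqa.csv" is a hypothetical dump with one
# majority-vote answer per question (step 1 already applied).
EXCLUDED = {"yes", "no", "unknown", "none", "n/a", "cant tell", "not sure"}
AMBIGUOUS = ["what is in the image", "what is this", "what is that", "what do you see"]

df = pd.read_csv("raw_vqa.csv")

# Step 2: drop boolean and uncertainty answers
df = df[~df["answer"].str.lower().isin(EXCLUDED)]

# Step 3: drop ambiguous questions
df = df[~df["question"].str.lower().str.contains("|".join(AMBIGUOUS), regex=True)]

# Step 4: keep concise answers (<=5 words, <=30 characters)
df = df[(df["answer"].str.split().str.len() <= 5) & (df["answer"].str.len() <= 30)]

# Balancing: require >=20 occurrences per answer, then cap each class at 600
counts = df["answer"].value_counts()
df = df[df["answer"].isin(counts[counts >= 20].index)]
df = df.groupby("answer", group_keys=False).head(600)
```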
## Repository Structure
```
Deva8/Generative-VQA-V2-Curated/
├── main_metadata.csv # ⭐ Primary data file (17 MB)
├── gen_vqa_v2-images.zip # 📦 Images archive (10.1 GB)
└── README.md
```
### Inside `gen_vqa_v2-images.zip`:
```
gen_vqa_v2-images.zip (10.1 GB)
└── gen_vqa_v2-images/
└── gen_vqa_v22/
└── images/
├── COCO_train2014_000000004702.jpg
├── COCO_train2014_000000012460.jpg
├── COCO_train2014_000000183672.jpg
└── ... (135,268 images total)
```
**Note**: The zip also contains `metadata.csv` and `qa_pairs.json` files, which are **not used** by this dataset. Please use `main_metadata.csv` at the repository root instead.
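To verify this layout programmatically before committing to the 10.1 GB download, you can list the repository's top-level files with `huggingface_hub`:
```python
from huggingface_hub import list_repo_files

files = list_repo_files("Deva8/Generative-VQA-V2-Curated", repo_type="dataset")
print(files)
# Expected to include: main_metadata.csv, gen_vqa_v2-images.zip, README.md
```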
## Download Instructions
### Option 1: Using `huggingface_hub` (Recommended)
```python
from huggingface_hub import hf_hub_download
import zipfile
import os

# Download the images zip file (10.1 GB - will be cached)
zip_path = hf_hub_download(
    repo_id="Deva8/Generative-VQA-V2-Curated",
    filename="gen_vqa_v2-images.zip",
    repo_type="dataset",
)

# Extract to a directory
extract_dir = "./gen_vqa_images"
os.makedirs(extract_dir, exist_ok=True)

print(f"Extracting {zip_path}...")
with zipfile.ZipFile(zip_path, "r") as zip_ref:
    zip_ref.extractall(extract_dir)

image_dir = os.path.join(extract_dir, "gen_vqa_v2-images/gen_vqa_v22/images")
num_images = len([f for f in os.listdir(image_dir) if f.endswith(".jpg")])
print(f"✓ Images extracted to: {extract_dir}")
print(f"✓ Found {num_images} images")
```
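Once the archive is extracted, each row in `main_metadata.csv` can be paired with its image via the `file_name` column. A minimal sketch using pandas and Pillow, assuming the extraction directory from Option 1:
```python
import os

import pandas as pd
from huggingface_hub import hf_hub_download
from PIL import Image

# Fetch the small metadata CSV (~17 MB) from the repo root
csv_path = hf_hub_download(
    repo_id="Deva8/Generative-VQA-V2-Curated",
    filename="main_metadata.csv",
    repo_type="dataset",
)
df = pd.read_csv(csv_path)

# `file_name` is relative to the extraction directory used above
row = df.iloc[0]
image = Image.open(os.path.join("./gen_vqa_images", row["file_name"]))
print(row["question"], "->", row["answer"])
```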
### Option 2: Manual Download
1. Go to: https://huggingface.co/datasets/Deva8/Generative-VQA-V2-Curated/tree/main
2. Click on `gen_vqa_v2-images.zip` (10.1 GB)
3. Click the download button
4. Extract the zip file to your working directory
## 🔧 Metadata Fields
The dataset viewer above shows the metadata CSV with the following columns:
| Field | Type | Description |
|-------|------|-------------|
| `image_id` | int64 | Original COCO Image ID |
| `question_id` | int64 | Original VQA v2 Question ID |
| `question` | string | Natural language question about the image |
| `answer` | string | Curated ground-truth answer |
| `file_name` | string | Relative path to image file |
### Example Rows:
```csv
image_id,question_id,question,answer,file_name
429568,429568000,What is behind the street sign?,tree,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000429568.jpg
4702,4702000,What is on the man's head?,soccer ball,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000004702.jpg
183672,183672001,How old is the man?,20,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000183672.jpg
```
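Because the card's config maps `main_metadata.csv` to a single `full` split, the metadata can also be loaded directly with the `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("Deva8/Generative-VQA-V2-Curated", split="full")
print(ds[0])  # {'image_id': ..., 'question': ..., 'answer': ..., 'file_name': ...}
```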
## 📜 License & Attribution
This dataset is a derivative work of:
- **VQA v2 Dataset** (Goyal et al., 2017) - [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **COCO Dataset** (Lin et al., 2014) - [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
This derivative work (the curated annotations and metadata) is released under the **MIT License**.
### Original Papers:
```bibtex
@inproceedings{goyal2017making,
  title={Making the {V} in {VQA} Matter: Elevating the Role of Image Understanding in Visual Question Answering},
  author={Goyal, Yash and Khot, Tejas and Summers-Stay, Douglas and Batra, Dhruv and Parikh, Devi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={6904--6913},
  year={2017}
}

@inproceedings{lin2014microsoft,
  title={Microsoft {COCO}: Common Objects in Context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence},
  booktitle={European Conference on Computer Vision},
  pages={740--755},
  year={2014},
  organization={Springer}
}
```
## 📖 Citation
If you use this dataset in your research or project, please cite:
```bibtex
@misc{devarajan_genvqa_2026,
author = {Devarajan},
title = {Generative-VQA-V2-Curated: A Balanced Dataset for Open-Ended Generative VQA},
year = {2026},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/datasets/Deva8/Generative-VQA-V2-Curated}}
}
```
## 🤝 Contributing
Found an issue or have suggestions? Please open a discussion on the Hugging Face dataset page!