---
license: mit
task_categories:
- visual-question-answering
tags:
- generative-vqa
- multimodal
- vqa-v2
- coco
- question-answering
pretty_name: Generative-VQA-V2 (Curated)
size_categories:
- 100K<n<1M
configs:
- config_name: default
  data_files:
  - split: full
    path: main_metadata.csv
---
# Generative-VQA-V2-Curated
A curated, balanced, and cleaned version of the VQA v2 dataset specifically optimized for Generative Visual Question Answering.
This dataset turns the standard VQA task into a generative challenge by removing "yes/no" shortcuts and balancing answer distributions to prevent models from overfitting on dominant classes.
## Dataset Summary
The primary goal of this curated set is to provide a "clean" signal for training multimodal models by:
- Eliminating Binary Biases: Removed all "yes/no" and "unknown" style answers
- Balancing Classes: Capped samples at 600 per answer to ensure the model learns a diverse vocabulary
- Filtering Ambiguity: Removed generic questions (e.g., "What is this?") to focus on specific visual grounding
## Dataset Statistics
- Total QA Pairs: 135,268
- Unique Answer Classes: 1,251
- Source Images: COCO Train 2014
- Minimum Frequency per Answer: 20
- Maximum Samples per Answer: 600
- Average Question Length: ~6 words
- Average Answer Length: ~1.5 words
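These statistics can be recomputed directly from `main_metadata.csv`. A minimal sketch with pandas; the two inline rows are illustrative stand-ins for the real file:

```python
import io
import pandas as pd

# Illustrative stand-in for main_metadata.csv (same columns as the real file).
csv_text = """image_id,question_id,question,answer,file_name
429568,429568000,What is behind the street sign?,tree,images/a.jpg
4702,4702000,What is on the man's head?,soccer ball,images/b.jpg
"""
df = pd.read_csv(io.StringIO(csv_text))

total_pairs = len(df)                                      # Total QA Pairs
unique_answers = df["answer"].nunique()                    # Unique Answer Classes
avg_q_words = df["question"].str.split().str.len().mean()  # Avg question length in words
avg_a_words = df["answer"].str.split().str.len().mean()    # Avg answer length in words
print(total_pairs, unique_answers, avg_q_words, avg_a_words)
```

On the full 135,268-row file the question/answer length averages should come out near the ~6 and ~1.5 words reported above.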
## Curation Logic
The dataset was generated using the following filtering pipeline:
- Consensus-Based: Only the majority-vote answer from the 10 human annotators is used
- Exclusion List:
  - Boolean answers: `yes`, `no`
  - Uncertainty markers: `unknown`, `none`, `n/a`, `cant tell`, `not sure`
- Ambiguity Filter: Removed questions containing:
- "what is in the image"
- "what is this"
- "what is that"
- "what do you see"
- Conciseness: Answers are restricted to ≤5 words and ≤30 characters
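The rules above can be sketched as a small filter. This is an illustrative reconstruction, not the author's actual pipeline code; all function and constant names are hypothetical:

```python
from collections import Counter

EXCLUDED_ANSWERS = {"yes", "no", "unknown", "none", "n/a", "cant tell", "not sure"}
GENERIC_PHRASES = ("what is in the image", "what is this", "what is that", "what do you see")
MIN_FREQ, MAX_PER_ANSWER = 20, 600  # frequency bounds from the statistics above

def keep_pair(question: str, answer: str) -> bool:
    """Exclusion list, ambiguity filter, and conciseness rules for one QA pair."""
    q, a = question.strip().lower(), answer.strip().lower()
    if a in EXCLUDED_ANSWERS:
        return False
    if any(phrase in q for phrase in GENERIC_PHRASES):
        return False
    return len(a.split()) <= 5 and len(a) <= 30

def balance(pairs, min_freq=MIN_FREQ, cap=MAX_PER_ANSWER):
    """Drop rare answer classes, then cap samples per remaining class."""
    freq = Counter(a.lower() for _, a in pairs)
    kept, seen = [], Counter()
    for q, a in pairs:
        key = a.lower()
        if freq[key] >= min_freq and seen[key] < cap:
            seen[key] += 1
            kept.append((q, a))
    return kept
```

Per the consensus rule, `pairs` here would already hold only the majority-vote answer out of the 10 human annotations for each question.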
## Repository Structure
```
Deva8/Generative-VQA-V2-Curated/
├── main_metadata.csv       # ⭐ Primary data file (17 MB)
├── gen_vqa_v2-images.zip   # 📦 Images archive (10.1 GB)
└── README.md
```
Inside `gen_vqa_v2-images.zip`:

```
gen_vqa_v2-images.zip (10.1 GB)
└── gen_vqa_v2-images/
    └── gen_vqa_v22/
        └── images/
            ├── COCO_train2014_000000004702.jpg
            ├── COCO_train2014_000000012460.jpg
            ├── COCO_train2014_000000183672.jpg
            └── ... (135,268 images total)
```
**Note:** The zip also contains `metadata.csv` and `qa_pairs.json` files, which are not used by this dataset. Please use `main_metadata.csv` at the repository root instead.
## Download Instructions
### Option 1: Using `huggingface_hub` (Recommended)
```python
from huggingface_hub import hf_hub_download
import zipfile
import os

# Download the images zip file (10.1 GB - will be cached)
zip_path = hf_hub_download(
    repo_id="Deva8/Generative-VQA-V2-Curated",
    filename="gen_vqa_v2-images.zip",
    repo_type="dataset",
)

# Extract to a directory
extract_dir = "./gen_vqa_images"
os.makedirs(extract_dir, exist_ok=True)

print(f"Extracting {zip_path}...")
with zipfile.ZipFile(zip_path, "r") as zip_ref:
    zip_ref.extractall(extract_dir)

print(f"✓ Images extracted to: {extract_dir}")
image_dir = os.path.join(extract_dir, "gen_vqa_v2-images/gen_vqa_v22/images")
print(f"✓ Found {len([f for f in os.listdir(image_dir) if f.endswith('.jpg')])} images")
```
### Option 2: Manual Download
1. Go to: https://huggingface.co/datasets/Deva8/Generative-VQA-V2-Curated/tree/main
2. Click on `gen_vqa_v2-images.zip` (10.1 GB)
3. Click the download button
4. Extract the zip file to your working directory
## 🔧 Metadata Fields
The dataset viewer above shows the metadata CSV with the following columns:
| Field | Type | Description |
|---|---|---|
| `image_id` | int64 | Original COCO Image ID |
| `question_id` | int64 | Original VQA v2 Question ID |
| `question` | string | Natural language question about the image |
| `answer` | string | Curated ground-truth answer |
| `file_name` | string | Relative path to image file |
**Example Rows:**

```csv
image_id,question_id,question,answer,file_name
429568,429568000,What is behind the street sign?,tree,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000429568.jpg
4702,4702000,What is on the man's head?,soccer ball,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000004702.jpg
183672,183672001,How old is the man?,20,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000183672.jpg
```
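Once the images are extracted, each `file_name` can be resolved against the extraction directory. A minimal sketch with pandas, assuming the `./gen_vqa_images` directory used in Option 1; the inline rows mirror the examples above:

```python
import io
import os
import pandas as pd

# In practice: df = pd.read_csv("main_metadata.csv"); inline rows shown for illustration.
csv_text = """image_id,question_id,question,answer,file_name
429568,429568000,What is behind the street sign?,tree,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000429568.jpg
4702,4702000,What is on the man's head?,soccer ball,gen_vqa_v2-images/gen_vqa_v22/images/COCO_train2014_000000004702.jpg
"""
df = pd.read_csv(io.StringIO(csv_text))

# Resolve each relative file_name against the extraction directory (assumed path).
extract_dir = "./gen_vqa_images"
df["image_path"] = df["file_name"].map(lambda p: os.path.join(extract_dir, p))

for row in df.itertuples():
    print(row.question, "->", row.answer, "@", row.image_path)
```

From here the `image_path` column can be fed to any image loader (e.g. `PIL.Image.open`) alongside the `question`/`answer` pairs.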
## 📜 License & Attribution
This dataset is a derivative work of:
- VQA v2 (questions and answers)
- MS COCO Train 2014 (images)

All derivative work is released under the same MIT License.
**Original Papers:**

```bibtex
@inproceedings{goyal2017making,
  title={Making the {V} in {VQA} Matter: Elevating the Role of Image Understanding in Visual Question Answering},
  author={Goyal, Yash and Khot, Tejas and Summers-Stay, Douglas and Batra, Dhruv and Parikh, Devi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={6904--6913},
  year={2017}
}

@inproceedings{lin2014microsoft,
  title={Microsoft {COCO}: Common Objects in Context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
  booktitle={European Conference on Computer Vision},
  pages={740--755},
  year={2014},
  organization={Springer}
}
```
## 📖 Citation
If you use this dataset in your research or project, please cite:
```bibtex
@misc{devarajan_genvqa_2026,
  author = {Devarajan},
  title = {Generative-VQA-V2-Curated: A Balanced Dataset for Open-Ended Generative VQA},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/Deva8/Generative-VQA-V2-Curated}}
}
```
## 🤝 Contributing
Found an issue or have suggestions? Please open a discussion on the Hugging Face dataset page!