Update README.md
README.md CHANGED

@@ -1,5 +1,3 @@
-
-
 # Generative-VQA-V2-Curated
 
 A curated, balanced, and cleaned version of the VQA v2 dataset specifically optimized for **Generative Visual Question Answering**.

@@ -22,7 +20,7 @@ The primary goal of this curated set is to provide a "clean" signal for training
 * **Minimum Frequency per Answer:** 20
 * **Maximum Samples per Answer:** 600
 
-##
+## Curation Logic
 
 The dataset was generated using the following filtering pipeline:
 
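The two thresholds above (answers must occur at least 20 times, and each answer is capped at 600 samples) describe a frequency-balancing step; the pipeline code itself sits outside this hunk. Below is a minimal sketch of that kind of balancing, assuming a pandas DataFrame with an `answer` column — the names `balance_answers`, `MIN_FREQ`, and `MAX_SAMPLES` are illustrative, not taken from the repository:

```python
import pandas as pd

MIN_FREQ = 20      # "Minimum Frequency per Answer" from the card
MAX_SAMPLES = 600  # "Maximum Samples per Answer" from the card

def balance_answers(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Frequency-filter and downsample a VQA table by its answer column."""
    counts = df["answer"].value_counts()
    # Drop answers that appear fewer than MIN_FREQ times overall.
    keep = counts[counts >= MIN_FREQ].index
    df = df[df["answer"].isin(keep)]
    # Cap over-represented answers at MAX_SAMPLES randomly chosen rows each.
    return (
        df.groupby("answer", group_keys=False)
          .apply(lambda g: g.sample(n=min(len(g), MAX_SAMPLES), random_state=seed))
          .reset_index(drop=True)
    )
```

Sampling rather than truncating when capping keeps the retained subset representative of each answer's question distribution.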
@@ -80,12 +78,47 @@ This dataset is a derivative work of the **VQA v2 Dataset** and the **COCO Datas
 * **Images:** [COCO Consortium (CC BY 4.0)](https://cocodataset.org/#termsofuse)
 * **Annotations:** [VQA v2 (CC BY 4.0)](https://visualqa.org/download.html)
 
+## Citation
+
+If you use this dataset in your research or project, please cite it as follows:
+
+```bibtex
+@misc{devarajan_genvqa_2026,
+  author = {Devarajan},
+  title = {Generative-VQA-V2-Curated: A Balanced Dataset for Open-Ended VQA},
+  year = {2026},
+  publisher = {Hugging Face},
+  howpublished = {\url{https://huggingface.co/datasets/Deva8/Generative-VQA-V2-Curated}}
+}
+```
+
+---
 license: mit
 task_categories:
 - visual-question-answering
 tags:
--
--
-
+- generative-vqa
+- multimodal
+- vqa-v2
+pretty_name: Generative-VQA-V2 (Curated)
 size_categories:
--
+- 100K<n<1M
+configs:
+- data_files:
+  - split: train
+    path:
+    - "metadata.csv"
+    - "gen_vqa_v2-images.zip"
+dataset_info:
+  features:
+  - name: image_id
+    dtype: int64
+  - name: question_id
+    dtype: int64
+  - name: question
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: image_path
+    dtype: image
+---
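The added frontmatter wires the train split to `metadata.csv` plus `gen_vqa_v2-images.zip` and declares the column schema under `dataset_info.features`. A minimal loading sketch with the `datasets` library, assuming the Hub repo id from the citation (`Deva8/Generative-VQA-V2-Curated`) and that the declared config resolves as written:

```python
from datasets import load_dataset

# Repo id taken from the citation URL above; the split-to-file mapping
# comes from the `configs` block in the frontmatter.
ds = load_dataset("Deva8/Generative-VQA-V2-Curated", split="train")

print(ds.features)  # image_id/question_id: int64, question/answer: string, image_path: image
sample = ds[0]
print(sample["question"], "->", sample["answer"])
sample["image_path"]  # decoded image object, per the `image` dtype
```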