---
dataset_info:
  features:
  - name: dataset_source
    dtype: string
  - name: pair_id
    dtype: string
  - name: sample_id
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: images
    list: image
  - name: extra
    dtype: string
  splits:
  - name: test
    num_bytes: 1222676592
    num_examples: 12000
  download_size: 4193904901
  dataset_size: 1222676592
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
tags:
- finegrained-vqa
- vqa
- visual-reasoning
pretty_name: FGVQA
size_categories:
- 10K<n<100K
---
# FGVQA
This repository contains the FGVQA benchmark suite introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://glab-caltech.github.io/twin). FGVQA contains 12,000 challenging (image, question, answer) tuples emphasizing fine-grained image understanding.
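Each sample exposes the fields declared in the metadata above: `dataset_source`, `pair_id`, `sample_id`, `question_type`, `question`, `answer`, `images` (a list of images), and `extra`. Below is a minimal loading sketch using the 🤗 `datasets` library; the repository id is an assumption based on this card's location, so substitute the actual Hub path if it differs.
```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption; replace it with the
# actual Hub path of this dataset if it differs.
ds = load_dataset("dmarsili/FGVQA", split="test")  # single "test" split, 12,000 examples

sample = ds[0]
print(sample["dataset_source"])  # originating sub-benchmark (assumed from the field name)
print(sample["question_type"])   # question category
print(sample["question"])        # question text
print(sample["answer"])          # ground-truth answer
print(len(sample["images"]))     # number of images attached to this sample
```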
The benchmark suite is composed of six sub-benchmarks; each sample records its origin in the `dataset_source` field (see the filtering sketch after this list):
1) [TWIN-eval](https://glab-caltech.github.io/twin/)
2) [ILIAS](https://vrg.fel.cvut.cz/ilias/)
3) [Google Landmarks v2](https://github.com/cvdfoundation/google-landmark)
4) [MET](https://cmp.felk.cvut.cz/met/)
5) [CUB](https://www.vision.caltech.edu/datasets/cub_200_2011/)
6) [Inquire](https://inquire-benchmark.github.io/)
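Continuing the loading sketch above, one way to evaluate on a single sub-benchmark is to filter on `dataset_source`. The exact string values are an assumption here; inspect the column first to confirm them.
```python
# Assumption: dataset_source identifies the sub-benchmark each sample
# was drawn from. Print the unique values to confirm the exact strings.
print(sorted(set(ds["dataset_source"])))

# Keep only the samples from one sub-benchmark, e.g. CUB (name assumed).
cub_subset = ds.filter(lambda ex: ex["dataset_source"] == "CUB")
print(len(cub_subset))
```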
To evaluate on the dataset with LMMS-eval, please refer to this [repo](https://github.com/damianomarsili/TWIN).
## Citation
If you use the FGVQA benchmark suite in your research, please use the following BibTeX entry.
```bibtex
@misc{marsili2025notenhancingvisualperception,
  title={Same or Not? Enhancing Visual Perception in Vision-Language Models},
  author={Damiano Marsili and Aditya Mehta and Ryan Y. Lin and Georgia Gkioxari},
  year={2025},
  eprint={2512.23592},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.23592},
}
```