---
dataset_info:
  features:
  - name: dataset_source
    dtype: string
  - name: pair_id
    dtype: string
  - name: sample_id
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: images
    list: image
  - name: extra
    dtype: string
  splits:
  - name: test
    num_bytes: 1222676592
    num_examples: 12000
  download_size: 4193904901
  dataset_size: 1222676592
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
tags:
- finegrained-vqa
- vqa
- visual-reasoning
pretty_name: FGVQA
size_categories:
- 10K<n<100K
---
# FGVQA
This repository contains the FGVQA benchmark suite introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://glab-caltech.github.io/twin). FGVQA contains 12,000 challenging (image, question, answer) tuples emphasizing fine-grained image understanding.

The benchmark suite is composed of six sub-benchmarks:
1) [TWIN-eval](https://glab-caltech.github.io/twin/)
2) [ILIAS](https://vrg.fel.cvut.cz/ilias/)
3) [Google Landmarks v2](https://github.com/cvdfoundation/google-landmark)
4) [MET](https://cmp.felk.cvut.cz/met/)
5) [CUB](https://www.vision.caltech.edu/datasets/cub_200_2011/)
6) [Inquire](https://inquire-benchmark.github.io/)
To evaluate on the dataset with lmms-eval, please refer to this [repo](https://github.com/damianomarsili/TWIN).
## Citation
If you use the FGVQA benchmark suite in your research, please use the following BibTeX entry.
```bibtex
@misc{marsili2025notenhancingvisualperception,
  title={Same or Not? Enhancing Visual Perception in Vision-Language Models},
  author={Damiano Marsili and Aditya Mehta and Ryan Y. Lin and Georgia Gkioxari},
  year={2025},
  eprint={2512.23592},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.23592},
}
```