---
dataset_info:
  features:
    - name: dataset_source
      dtype: string
    - name: pair_id
      dtype: string
    - name: sample_id
      dtype: string
    - name: question_type
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: images
      list: image
    - name: extra
      dtype: string
  splits:
    - name: test
      num_bytes: 1222676592
      num_examples: 12000
  download_size: 4193904901
  dataset_size: 1222676592
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: cc-by-nc-4.0
task_categories:
  - visual-question-answering
tags:
  - finegrained-vqa
  - vqa
  - visual-reasoning
pretty_name: FGVQA
size_categories:
  - 10K<n<100K
---

# FGVQA

This repository contains the FGVQA benchmark suite introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://arxiv.org/abs/2512.23592). FGVQA contains 12,000 challenging (image, question, answer) tuples emphasizing fine-grained image understanding.

The benchmark suite is composed of six sub-benchmarks:

  1. TWIN-eval
  2. ILIAS
  3. Google Landmarks v2
  4. MET
  5. CUB
  6. Inquire

To evaluate on the dataset with lmms-eval, please refer to this repo.
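
For quick local inspection, the test split can also be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the Hub id `dmarsili/FGVQA` is an assumption inferred from this repository's owner, so substitute the actual id if it differs.

```python
from collections import Counter

from datasets import load_dataset

# Load the single "test" split declared in the metadata above.
# NOTE: "dmarsili/FGVQA" is a placeholder Hub id; replace it with the
# actual id of this dataset repository if it differs.
ds = load_dataset("dmarsili/FGVQA", split="test")
print(ds)  # 12,000 examples with the features listed above

# Each example holds string fields plus a list of decoded PIL images.
sample = ds[0]
print("Q:", sample["question"])
print("A:", sample["answer"])
print("images:", len(sample["images"]))

# The six sub-benchmarks are distinguished by the dataset_source field.
print(Counter(ds["dataset_source"]))
```

Individual sub-benchmarks can then be selected with `ds.filter`, keyed on the `dataset_source` values (the exact strings are not documented here).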

## Citation

If you use the FGVQA benchmark suite in your research, please use the following BibTeX entry:

```bibtex
@misc{marsili2025notenhancingvisualperception,
      title={Same or Not? Enhancing Visual Perception in Vision-Language Models},
      author={Damiano Marsili and Aditya Mehta and Ryan Y. Lin and Georgia Gkioxari},
      year={2025},
      eprint={2512.23592},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.23592},
}
```