---
dataset_info:
  features:
  - name: pair_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: string
  - name: image_2
    dtype: string
  - name: idx
    dtype: string
  - name: supercategory
    dtype: string
  - name: category
    dtype: string
  - name: type
    dtype: string
  - name: source_json
    dtype: string
  splits:
  - name: train
    num_bytes: 365770574
    num_examples: 561569
  download_size: 14528564
  dataset_size: 365770574
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- finegrained
- finegrained-vqa
pretty_name: TWIN
size_categories:
- 100K<n<1M
---
# TWIN
This repository contains the TWIN dataset introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://glab-caltech.github.io/twin). TWIN contains 561K challenging (image, question, answer) tuples emphasizing fine-grained image understanding.
To evaluate on the dataset with lmms-eval, please refer to this [repo](https://github.com/damianomarsili/lmms-eval).
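The record layout implied by the schema above can be sketched as a plain Python dict. The field names below come directly from this card's metadata; the placeholder values and the per-field comments describing their meaning are illustrative assumptions, not taken from the dataset itself:

```python
# Illustrative TWIN record. Column names match the dataset card's schema
# (every column is string-typed); the values are placeholders, and the
# field semantics in the comments are assumptions for illustration only.
record = {
    "pair_id": "0000",        # assumed: identifier linking the image pair
    "question": "...",        # fine-grained question about the images
    "answer": "...",          # ground-truth answer
    "image_1": "...",         # first image of the pair
    "image_2": "...",         # second image of the pair
    "idx": "0",               # example index
    "supercategory": "...",   # coarse category label
    "category": "...",        # fine category label
    "type": "...",            # assumed: question/perturbation type
    "source_json": "...",     # assumed: provenance of the example
}

# Sanity check: all ten columns from the card are present and string-typed.
expected_columns = {
    "pair_id", "question", "answer", "image_1", "image_2",
    "idx", "supercategory", "category", "type", "source_json",
}
assert set(record) == expected_columns
assert all(isinstance(v, str) for v in record.values())
```

In practice the actual data lives in the parquet shards under `data/train-*` and would typically be read with the `datasets` library's `load_dataset` on this repository's id with `split="train"`.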
## Citation
If you use the TWIN dataset in your research, please use the following BibTeX entry.
```
@misc{marsili2025notenhancingvisualperception,
title={Same or Not? Enhancing Visual Perception in Vision-Language Models},
author={Damiano Marsili and Aditya Mehta and Ryan Y. Lin and Georgia Gkioxari},
year={2025},
eprint={2512.23592},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.23592},
}
```