---
dataset_info:
  features:
  - name: Index
    dtype: int32
  - name: Question
    dtype: string
  - name: Options
    dtype: string
  - name: Correct Answer
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 3145724
    num_examples: 300
  download_size: 3063257
  dataset_size: 3145724
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- visual-question-answering
tags:
- multimodal
- vision-language
- clip
- benchmark
---
# MMVP (Multimodal Visual Patterns) Benchmark

This is a corrected version of the MMVP benchmark, re-hosted by lmms-lab-eval for use with lmms-eval.
## Why this copy?
The original [MMVP/MMVP](https://huggingface.co/datasets/MMVP/MMVP) dataset was uploaded in `imagefolder` format, which exposes only the `image` column. The text annotations (`Question`, `Options`, `Correct Answer`, `Index`) from the accompanying `Questions.csv` were never loaded into the dataset, making it unusable for evaluation.

This version reconstructs the complete dataset by merging the images with `Questions.csv` from the original repository.
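The merge amounts to joining each image with its CSV row by sample index. A minimal sketch of that join (the column names match this dataset's features; the `merge_annotations` helper and the index-based image file naming, e.g. `1.jpg`, are assumptions for illustration):

```python
import csv
import io

def merge_annotations(csv_text: str, image_paths: list[str]) -> list[dict]:
    """Join Questions.csv rows with image files keyed by sample index."""
    # Map integer index -> image path, assuming files are named "<Index>.<ext>".
    by_index = {}
    for path in image_paths:
        stem = path.rsplit("/", 1)[-1].split(".")[0]
        by_index[int(stem)] = path

    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        idx = int(row["Index"])
        records.append({
            "Index": idx,
            "Question": row["Question"],
            "Options": row["Options"],
            "Correct Answer": row["Correct Answer"],
            "image": by_index[idx],  # image path joined back onto the row
        })
    return records

# Toy example with one sample (not real MMVP data):
csv_text = (
    "Index,Question,Options,Correct Answer\n"
    "1,Is the dog facing left or right?,(a) Left (b) Right,(a)\n"
)
records = merge_annotations(csv_text, ["images/1.jpg"])
```

The real reconstruction would then push the merged records (with images decoded) to the Hub as a standard Parquet-backed dataset, so all five columns load together.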
## Ground Truth Corrections
Based on verification in lmms-eval issue #1018 and the original MMVP issue #30, we found that two pairs of samples had their answers swapped. The corrections are applied directly in this dataset:
| Index | Question | Original GT | Corrected GT | Reason |
|---|---|---|---|---|
| 99 | Does the elephant have long or short tusks? | (a) Long | (b) Short | Image shows short tusks |
| 100 | Does the elephant have long or short tusks? | (b) Short | (a) Long | Image shows long tusks |
| 279 | Is the elderly person standing or sitting? | (a) Standing | (b) Sitting | Image shows person sitting on bench |
| 280 | Is the elderly person standing or sitting? | (b) Sitting | (a) Standing | Image shows person standing |
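If you are working from the original uncorrected data, the four fixes in the table above can be patched in programmatically. A sketch (the `apply_corrections` helper is hypothetical; the index-to-answer mapping comes from the table):

```python
# Corrected answer letter per affected sample Index, from the table above.
CORRECTIONS = {99: "(b)", 100: "(a)", 279: "(b)", 280: "(a)"}

def apply_corrections(rows: list[dict]) -> list[dict]:
    """Overwrite the Correct Answer field for the four swapped samples."""
    for row in rows:
        fix = CORRECTIONS.get(row["Index"])
        if fix is not None:
            row["Correct Answer"] = fix
    return rows

# Toy example: sample 99 had "(a)" in the original data and gets "(b)".
rows = [
    {"Index": 99, "Correct Answer": "(a)"},
    {"Index": 1, "Correct Answer": "(a)"},  # untouched
]
rows = apply_corrections(rows)
```

In this dataset the corrections are already baked in, so no patching is needed at load time.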
## Dataset Structure

| Field | Type | Description |
|---|---|---|
| `Index` | int32 | 1-based sample index (1–300) |
| `Question` | string | The visual question |
| `Options` | string | Answer choices in the format `(a) ... (b) ...` |
| `Correct Answer` | string | Ground truth: `(a)` or `(b)` |
| `image` | image | 224×224 RGB image |
- 300 samples organized as 150 pairs
- Each pair has the same question but opposite correct answers
- Tests 9 visual patterns: orientation, direction, color, counting, etc.
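The pairing matters for scoring: the original MMVP paper scores at the pair level, crediting a model only when both questions in a pair are answered correctly, which prevents a blind `(a)`/`(b)` bias from scoring 50%. A minimal sketch of that metric (not the lmms-eval implementation):

```python
def pair_accuracy(is_correct: list[bool]) -> float:
    """Pair-level accuracy: samples (2k-1, 2k) by Index form a pair,
    and a pair counts only if both of its questions are correct."""
    assert len(is_correct) % 2 == 0, "samples must form complete pairs"
    pairs = list(zip(is_correct[0::2], is_correct[1::2]))
    return sum(a and b for a, b in pairs) / len(pairs)

# Two pairs: first pair fully correct, second pair half correct -> 0.5
score = pair_accuracy([True, True, True, False])
```

Per-question accuracy can still be reported alongside, but the pair score is the headline MMVP number.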
## References
- Paper: *Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs*
- Original Repository: https://github.com/tsb0601/MMVP
- Original Dataset: https://huggingface.co/datasets/MMVP/MMVP
- lmms-eval Task: https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks/mmvp
## Citation

```bibtex
@inproceedings{tong2024eyes,
  title={Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs},
  author={Tong, Shengbang and Liu, Zhuang and Zhai, Yuexiang and Ma, Yi and LeCun, Yann and Xie, Saining},
  booktitle={CVPR},
  year={2024}
}
```