---
dataset_info:
  features:
  - name: Index
    dtype: int32
  - name: Question
    dtype: string
  - name: Options
    dtype: string
  - name: Correct Answer
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 3145724
    num_examples: 300
  download_size: 3063257
  dataset_size: 3145724
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- visual-question-answering
tags:
- multimodal
- vision-language
- clip
- benchmark
---
# MMVP (Multimodal Visual Patterns) Benchmark
This is a corrected version of the [MMVP benchmark](https://huggingface.co/datasets/MMVP/MMVP), re-hosted by [lmms-lab-eval](https://huggingface.co/lmms-lab-eval) for use with [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).
## Why this copy?
The original `MMVP/MMVP` dataset was uploaded in `imagefolder` format, which only exposes the `image` column. The text annotations (`Question`, `Options`, `Correct Answer`, `Index`) from the accompanying `Questions.csv` were not loaded into the dataset, making it unusable for evaluation.
This version reconstructs the complete dataset by merging the images with `Questions.csv` from the original repository.
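The merge can be sketched roughly as follows. This is a minimal illustration, not the exact reconstruction script: the CSV column names are assumed to match the dataset fields above, and the `MMVP Images/<Index>.jpg` layout is assumed from the original repository.

```python
import csv
from pathlib import Path

def load_questions(csv_path):
    """Parse Questions.csv into records, attaching the image path assumed
    by the original MMVP layout ('MMVP Images/<Index>.jpg')."""
    records = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            records.append({
                "Index": int(row["Index"]),
                "Question": row["Question"],
                "Options": row["Options"],
                "Correct Answer": row["Correct Answer"],
                "image": str(Path("MMVP Images") / f"{row['Index']}.jpg"),
            })
    return records
```

The resulting records can then be turned into a `datasets.Dataset` (e.g. via `Dataset.from_list` with an `Image()` feature to cast the paths to images).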
## Ground Truth Corrections
Based on verification in [lmms-eval issue #1018](https://github.com/EvolvingLMMs-Lab/lmms-eval/issues/1018) and the [original MMVP issue #30](https://github.com/tsb0601/MMVP/issues/30), we found that two pairs of samples had their answers swapped. The corrections are applied directly in this dataset:
| Index | Question | Original GT | Corrected GT | Reason |
|:-----:|:---------|:-----------:|:------------:|:-------|
| 99 | Does the elephant have long or short tusks? | (a) Long | **(b) Short** | Image shows short tusks |
| 100 | Does the elephant have long or short tusks? | (b) Short | **(a) Long** | Image shows long tusks |
| 279 | Is the elderly person standing or sitting? | (a) Standing | **(b) Sitting** | Image shows person sitting on bench |
| 280 | Is the elderly person standing or sitting? | (b) Sitting | **(a) Standing** | Image shows person standing |
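The four corrections in the table amount to a small in-place remap on `Correct Answer`, sketched below (the `Index`/`Correct Answer` keys mirror the dataset fields; this is illustrative, not the actual patch script):

```python
# Corrected ground truth, keyed by 1-based sample index (see table above).
CORRECTIONS = {99: "(b)", 100: "(a)", 279: "(b)", 280: "(a)"}

def apply_corrections(records):
    """Overwrite 'Correct Answer' for the four swapped samples."""
    for rec in records:
        if rec["Index"] in CORRECTIONS:
            rec["Correct Answer"] = CORRECTIONS[rec["Index"]]
    return records
```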
## Dataset Structure
| Field | Type | Description |
|-------|------|-------------|
| `Index` | int32 | 1-based sample index (1–300) |
| `Question` | string | The visual question |
| `Options` | string | Answer choices in format `(a) ... (b) ...` |
| `Correct Answer` | string | Ground truth: `(a)` or `(b)` |
| `image` | image | 224×224 RGB image |
- **300 samples** organized as **150 pairs**
- Each pair has the same question but opposite correct answers
- Tests 9 visual patterns: orientation, direction, color, counting, etc.
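Because samples come in question pairs, MMVP is typically scored at the pair level: a pair counts as correct only if both of its questions are answered correctly. A minimal sketch of that scoring, assuming predictions arrive in dataset order so that consecutive samples form a pair:

```python
def pair_accuracy(correct_flags):
    """MMVP pair scoring: consecutive samples (2i, 2i+1) form a pair,
    and a pair scores only when both answers are correct."""
    assert len(correct_flags) % 2 == 0, "expects an even number of samples"
    pairs = [
        correct_flags[i] and correct_flags[i + 1]
        for i in range(0, len(correct_flags), 2)
    ]
    return sum(pairs) / len(pairs)
```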
## References
- **Paper**: [Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs](https://arxiv.org/abs/2401.06209)
- **Original Repository**: https://github.com/tsb0601/MMVP
- **Original Dataset**: https://huggingface.co/datasets/MMVP/MMVP
- **lmms-eval Task**: https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks/mmvp
## Citation
```bibtex
@inproceedings{tong2024eyes,
  title={Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs},
  author={Tong, Shengbang and Liu, Zhuang and Zhai, Yuexiang and Ma, Yi and LeCun, Yann and Xie, Saining},
  booktitle={CVPR},
  year={2024}
}
```