luodian committed · verified · commit 09a45a5 · parent 84b2c6c

Add dataset card with ground truth correction documentation

Files changed (1): README.md (+61 -0)
configs:
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- visual-question-answering
tags:
- multimodal
- vision-language
- clip
- benchmark
---

# MMVP (Multimodal Visual Patterns) Benchmark

This is a corrected version of the [MMVP benchmark](https://huggingface.co/datasets/MMVP/MMVP), re-hosted by [lmms-lab-eval](https://huggingface.co/lmms-lab-eval) for use with [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).

## Why this copy?

The original `MMVP/MMVP` dataset was uploaded in `imagefolder` format, which only exposes the `image` column. The text annotations (`Question`, `Options`, `Correct Answer`, `Index`) from the accompanying `Questions.csv` were never loaded into the dataset, making it unusable for evaluation.

This version reconstructs the complete dataset by merging the images with `Questions.csv` from the original repository.
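The merge described above can be sketched roughly as follows. The CSV header matches this card's schema, but the toy CSV contents and image file names here are illustrative assumptions, not the actual files from the original repository:

```python
import csv
import io

# Toy stand-in for Questions.csv from the original MMVP repo
# (the real file has 300 rows with the same four columns).
questions_csv = io.StringIO(
    "Index,Question,Options,Correct Answer\n"
    '99,Does the elephant have long or short tusks?,"(a) Long (b) Short",(a)\n'
    '100,Does the elephant have long or short tusks?,"(a) Long (b) Short",(b)\n'
)

# Images in the imagefolder are assumed to be named by their 1-based index.
images = {99: "images/99.jpg", 100: "images/100.jpg"}

# Join the text annotations with the images on the Index column.
merged = []
for row in csv.DictReader(questions_csv):
    idx = int(row["Index"])
    merged.append({**row, "Index": idx, "image": images[idx]})

print(len(merged))  # 2 reconstructed samples in this toy example
```

The real reconstruction does the same join across all 300 rows, yielding one record per image with the full annotation columns attached.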

## Ground Truth Corrections

Based on verification in [lmms-eval issue #1018](https://github.com/EvolvingLMMs-Lab/lmms-eval/issues/1018) and the [original MMVP issue #30](https://github.com/tsb0601/MMVP/issues/30), we found that two pairs of samples had their answers swapped. The corrections are applied directly in this dataset:

| Index | Question | Original GT | Corrected GT | Reason |
|:-----:|:---------|:-----------:|:------------:|:-------|
| 99 | Does the elephant have long or short tusks? | (a) Long | **(b) Short** | Image shows short tusks |
| 100 | Does the elephant have long or short tusks? | (b) Short | **(a) Long** | Image shows long tusks |
| 279 | Is the elderly person standing or sitting? | (a) Standing | **(b) Sitting** | Image shows person sitting on bench |
| 280 | Is the elderly person standing or sitting? | (b) Sitting | **(a) Standing** | Image shows person standing |
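Equivalently, the fix amounts to swapping the `Correct Answer` values within each affected pair. A minimal sketch of that correction, using toy rows that mimic the original (pre-correction) labels:

```python
# Toy rows mimicking the four affected samples (the full dataset has 300).
rows = [
    {"Index": 99, "Correct Answer": "(a)"},
    {"Index": 100, "Correct Answer": "(b)"},
    {"Index": 279, "Correct Answer": "(a)"},
    {"Index": 280, "Correct Answer": "(b)"},
]

# Pairs whose ground truth was swapped in the original upload.
SWAPPED_PAIRS = [(99, 100), (279, 280)]

by_index = {r["Index"]: r for r in rows}
for a, b in SWAPPED_PAIRS:
    # Exchange the two labels within the pair.
    by_index[a]["Correct Answer"], by_index[b]["Correct Answer"] = (
        by_index[b]["Correct Answer"],
        by_index[a]["Correct Answer"],
    )

print(by_index[99]["Correct Answer"])  # (b)
```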

## Dataset Structure

| Field | Type | Description |
|-------|------|-------------|
| `Index` | int32 | 1-based sample index (1–300) |
| `Question` | string | The visual question |
| `Options` | string | Answer choices in format `(a) ... (b) ...` |
| `Correct Answer` | string | Ground truth: `(a)` or `(b)` |
| `image` | image | 224×224 RGB image |
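For evaluation code, the `Options` string can be split into labeled choices. A small sketch (the exact whitespace around the `(a)`/`(b)` labels is an assumption):

```python
import re

def parse_options(options: str) -> dict:
    """Split an Options string like '(a) Long (b) Short' into a dict."""
    # re.split keeps the captured labels: ['', '(a)', 'Long', '(b)', 'Short']
    parts = re.split(r"\s*(\([ab]\))\s*", options)
    labels = parts[1::2]
    texts = parts[2::2]
    return dict(zip(labels, texts))

print(parse_options("(a) Long (b) Short"))  # {'(a)': 'Long', '(b)': 'Short'}
```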

- **300 samples** organized as **150 pairs**
- Each pair has the same question but opposite correct answers
- Tests 9 visual patterns: orientation, direction, color, counting, etc.
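Because the answers come in opposed pairs, the MMVP paper credits a pair only when both of its questions are answered correctly. A minimal sketch of that pair-level accuracy, assuming per-sample correctness booleans ordered by `Index`:

```python
def pair_accuracy(correct_flags: list[bool]) -> float:
    """Pair-level accuracy: a pair counts only if both samples are correct.

    `correct_flags` is ordered by Index, so list positions 2k and 2k+1
    form one pair (300 samples -> 150 pairs).
    """
    pairs = zip(correct_flags[0::2], correct_flags[1::2])
    scores = [a and b for a, b in pairs]
    return sum(scores) / len(scores)

# Hypothetical results for 4 samples (2 pairs): only the first pair
# has both answers right, so pair accuracy is 0.5.
print(pair_accuracy([True, True, True, False]))  # 0.5
```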

## References

- **Paper**: [Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs](https://arxiv.org/abs/2401.06209)
- **Original Repository**: https://github.com/tsb0601/MMVP
- **Original Dataset**: https://huggingface.co/datasets/MMVP/MMVP
- **lmms-eval Task**: https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks/mmvp

## Citation

```bibtex
@inproceedings{tong2024eyes,
  title={Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs},
  author={Tong, Shengbang and Liu, Zhuang and Zhai, Yuexiang and Ma, Yi and LeCun, Yann and Xie, Saining},
  booktitle={CVPR},
  year={2024}
}
```