Improve dataset card: Add paper/code/project links, examples, license, citation & update task category

#2
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +60 -7
README.md CHANGED
@@ -1,4 +1,10 @@
  ---
+ language:
+ - en
+ license: cc-by-sa-4.0
+ task_categories:
+ - image-text-to-text
+ pretty_name: VisualOverload
  dataset_info:
    features:
    - name: image
@@ -28,21 +34,50 @@ configs:
    data_files:
    - split: test
      path: data/test-*
- license: cc-by-sa-4.0
- task_categories:
- - visual-question-answering
- language:
- - en
  tags:
  - art
- pretty_name: VisualOverload
  ---
+
  # VisualOverload
  <p align="center">
  <img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/logo.jpg?raw=true" width="400">
  </p>
+
+ [📚 Paper](https://huggingface.co/papers/2509.25339) | [💻 Code](https://github.com/paulgavrikov/visualoverload) | [🌐 Project Page](https://paulgavrikov.github.io/visualoverload/) | [🏆 Leaderboard](https://huggingface.co/spaces/paulgavrikov/visualoverload-submit) | [🎯 Online Evaluator](https://huggingface.co/spaces/paulgavrikov/visualoverload-submit)
+
  Is basic visual understanding really solved in state-of-the-art VLMs? We present VisualOverload, a slightly different visual question answering (VQA) benchmark comprising 2,720 question–answer pairs, with privately held ground-truth responses. Unlike prior VQA datasets that typically focus on near global image understanding, VisualOverload challenges models to perform simple, knowledge-free vision tasks in densely populated (or, overloaded) scenes. Our dataset consists of high-resolution scans of public-domain paintings that are populated with multiple figures, actions, and unfolding subplots set against elaborately detailed backdrops. We manually annotated these images with questions across six task categories to probe for a thorough understanding of the scene. We hypothesize that current benchmarks overestimate the performance of VLMs, and encoding and reasoning over details is still a challenging task for them, especially if they are confronted with densely populated scenes. Indeed, we observe that even the best model (o3) out of 37 tested models only achieves 19.6% accuracy on our hardest test split and overall 69.5% accuracy on all questions. Beyond a thorough evaluation, we complement our benchmark with an error analysis that reveals multiple failure modes, including a lack of counting skills, failure in OCR, and striking logical inconsistencies under complex tasks. Altogether, VisualOverload exposes a critical gap in current vision models and offers a crucial resource for the community to develop better models.

+ #### Examples
+
+ <table align="left">
+ <tr>
+ <td>Image:</td>
+ <td><img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/1.jpg?raw=true" width="400"></td>
+ <td><img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/2.jpg?raw=true" width="400"></td>
+ <td><img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/3.jpg?raw=true" width="400"></td>
+ </tr>
+ <tr align="left">
+ <td>Task:</td>
+ <td><b>Reasoning</b></td>
+ <td><b>OCR</b></td>
+ <td><b>Counting</b></td>
+ </tr>
+ <tr align="left">
+ <td>Question:</td>
+ <td>Depending on the shadow of the people, what is the most likely position of the sun?</td>
+ <td>What is the ninth word of the caption below the image?</td>
+ <td>How many live animals can be seen?</td>
+ </tr>
+ <tr align="left">
+ <td>Options:</td>
+ <td>A. behind the right building,
+ B. behind the left building,
+ C. its night time,
+ D. behind the middle tower</td>
+ <td>(freeform)</td>
+ <td>(freeform)</td>
+ </tr>
+ </table>

  ## 📂 Load the dataset

@@ -82,4 +117,22 @@ Example:
  ]
  ```
  ## 🏆 Submit to the leaderboard
- We welcome all submissions for model *or* method (including prompting-based) to our dataset. Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON.
+ We welcome all submissions for model *or* method (including prompting-based) to our dataset. Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON.
+
+ ## 📝 License
+
+ Our dataset is licensed under CC BY-SA 4.0. All images are based on artwork that is royalty-free public domain (CC0).
+
+ ## 📚 Citation
+
+ ```latex
+ @misc{gavrikov2025visualoverload,
+   title={VisualOverload: Probing Visual Understanding of VLMs in Really Dense Scenes},
+   author={Paul Gavrikov and Wei Lin and M. Jehanzeb Mirza and Soumya Jahagirdar and Muhammad Huzaifa and Sivan Doveh and Serena Yeung-Levy and James Glass and Hilde Kuehne},
+   year={2025},
+   eprint={2509.25339},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2509.25339},
+ }
+ ```
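
As a quick orientation for the workflow the updated card describes, here is a minimal sketch of loading the benchmark and writing a predictions file for the leaderboard. It assumes the dataset id `paulgavrikov/visualoverload` and the `test` split declared in the YAML metadata; the field names `question_id` and `response` are hypothetical placeholders, since the card's actual loading snippet and example JSON fall outside the hunks shown in this diff, and the GitHub issue template remains the authoritative submission format.

```python
# Minimal sketch (assumptions: dataset id "paulgavrikov/visualoverload", a "test"
# split, and hypothetical field names -- consult the dataset card and the GitHub
# issue template for the real schema and submission format).
import json

from datasets import load_dataset

ds = load_dataset("paulgavrikov/visualoverload", split="test")

predictions = []
for example in ds:
    # Replace this placeholder with a real VLM call on example["image"]
    # and the question text.
    answer = "A"
    predictions.append({
        "question_id": example.get("question_id"),  # hypothetical field name
        "response": answer,                          # hypothetical field name
    })

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```

Because the ground-truth responses are held privately, scores are only obtained by submitting this JSON through a GitHub issue or the online evaluator linked in the card.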