---
license: apache-2.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
tags:
- benchmark
- vision
- reasoning
- multimodal
- evaluation
pretty_name: VisRes-Bench
dataset_info:
- config_name: level_1_global_occlusion_50
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_global_occlusion_70
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_global_occlusion_80
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_edges
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_location_random_sampling
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_brightness
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_blur
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_rotation
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_rotation_random_sampling
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_edges_random_sampling
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_1_location
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 1000
- config_name: level_2_uniform_count
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_count_progression
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_uniform_orientation
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 458
- config_name: level_2_count_2_same_1_diff
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_orientation_2same_1diff
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 498
- config_name: level_2_uniform_color
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_count_arithmetic
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_count_minmax
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_orientation_3_diff
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_color_2same_1diff
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_color_3_diff
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_2_count_3_diff
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_3_spiral_color_orientation
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 350
- config_name: level_3_spiral_object_color
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 464
- config_name: level_3_coupled_color_count
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 500
- config_name: level_3_independent_color_object_orientation
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 355
- config_name: level_3_coupled_color_orientation
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 374
- config_name: level_3_Independent_count_object_color
features:
- name: id
dtype: string
- name: task
dtype: string
- name: level
dtype: string
- name: guided_question
dtype: string
- name: generic_question
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_examples: 479
configs:
- config_name: level_1_global_occlusion_50
data_files:
- split: test
path: level_1_global_occlusion_50percent/test-*
- config_name: level_1_global_occlusion_70
data_files:
- split: test
path: level_1_global_occlusion_70percent/test-*
- config_name: level_1_global_occlusion_80
data_files:
- split: test
path: level_1_global_occlusion_80percent/test-*
- config_name: level_1_edges
data_files:
- split: test
path: level_1_edges_eval_6k_location_only_dino_mode_options/test-*
- config_name: level_1_location_random_sampling
data_files:
- split: test
path: level_1_eval_6k_location_only_random_sampling/test-*
- config_name: level_1_brightness
data_files:
- split: test
path: level_1_eval_6k_brightness_dino_options/test-*
- config_name: level_1_blur
data_files:
- split: test
path: level_1_eval_6k_blur_dino_options/test-*
- config_name: level_1_rotation
data_files:
- split: test
path: level_1_eval_6k_rotation_direct_dino_options/test-*
- config_name: level_1_rotation_random_sampling
data_files:
- split: test
path: level_1_eval_6k_single_rotation_same_options/test-*
- config_name: level_1_edges_random_sampling
data_files:
- split: test
path: level_1_edges_eval_6k_location_only_random_sampling/test-*
- config_name: level_1_location
data_files:
- split: test
path: level_1_eval_6k_location_only_dino_mode_options/test-*
- config_name: level_2_uniform_count
data_files:
- split: test
path: level_2_count_only/test-*
- config_name: level_2_count_progression
data_files:
- split: test
path: level_2_count_progression_mixed/test-*
- config_name: level_2_uniform_orientation
data_files:
- split: test
path: level_2_orientation_only/test-*
- config_name: level_2_count_2_same_1_diff
data_files:
- split: test
path: level_2_count_distribution_2same_1diff/test-*
- config_name: level_2_orientation_2same_1diff
data_files:
- split: test
path: level_2_orientation_distribution_2same_1diff/test-*
- config_name: level_2_uniform_color
data_files:
- split: test
path: level_2_color_only/test-*
- config_name: level_2_count_arithmetic
data_files:
- split: test
path: level_2_count_operations/test-*
- config_name: level_2_count_minmax
data_files:
- split: test
path: level_2_count_minmax/test-*
- config_name: level_2_orientation_3_diff
data_files:
- split: test
path: level_2_orientation_distribution/test-*
- config_name: level_2_color_2same_1diff
data_files:
- split: test
path: level_2_color_distribution_2same_1diff/test-*
- config_name: level_2_color_3_diff
data_files:
- split: test
path: level_2_color_distribution/test-*
- config_name: level_2_count_3_diff
data_files:
- split: test
path: level_2_count_distribution/test-*
- config_name: level_3_spiral_color_orientation
data_files:
- split: test
path: level_3_compositional_spiral_orientation/test-*
- config_name: level_3_spiral_object_color
data_files:
- split: test
path: level_3_compositional_spiral_object_color/test-*
- config_name: level_3_coupled_color_count
data_files:
- split: test
path: level_3_coupled_count_color/test-*
- config_name: level_3_independent_color_object_orientation
data_files:
- split: test
path: level_3_independent_color_object_orientation/test-*
- config_name: level_3_coupled_color_orientation
data_files:
- split: test
path: level_3_coupled_orientation_color/test-*
- config_name: level_3_Independent_count_object_color
data_files:
- split: test
path: level_3_independent_distribution_arithmetic_object/test-*
---

# VisRes Bench

VisRes Bench is a benchmark for evaluating the visual reasoning capabilities of Vision-Language Models (VLMs) in naturalistic settings without contextual language supervision. It is introduced in the paper *VisRes Bench: On Evaluating the Visual Reasoning Capabilities of VLMs*.
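The configs listed in the metadata can be loaded with the 🤗 `datasets` library. The sketch below uses the config names declared above; the repo id is a placeholder, substitute this dataset's actual Hub path:

```python
# Sketch: loading one VisRes-Bench config with the Hugging Face `datasets` library.
# REPO_ID is a placeholder -- replace it with this dataset's actual Hub path.
REPO_ID = "<org>/VisRes-Bench"

# Level-1 config names, as declared in this card's metadata.
LEVEL_1_CONFIGS = [
    "level_1_global_occlusion_50",
    "level_1_global_occlusion_70",
    "level_1_global_occlusion_80",
    "level_1_edges",
    "level_1_edges_random_sampling",
    "level_1_location",
    "level_1_location_random_sampling",
    "level_1_brightness",
    "level_1_blur",
    "level_1_rotation",
    "level_1_rotation_random_sampling",
]


def load_config(name: str):
    """Return the test split of one config (requires network access)."""
    from datasets import load_dataset  # lazy import so the listing works offline

    return load_dataset(REPO_ID, name, split="test")
```

For example, `ds = load_config("level_1_blur")` returns rows with the `images`, `question`, and `answer` fields described in the metadata.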
## Paper Summary

Vision-Language Models excel at captioning and VQA, but it is unclear how much of that performance relies on genuine visual reasoning versus linguistic priors. VisRes Bench addresses this with image-only, four-choice tasks on real-world images (~19,000 samples), so that performance reflects visual reasoning rather than textual shortcuts.
The benchmark is organized into three levels of increasing complexity:
- Level 1 — Perceptual grounding: local patch completion (a masked tile plus 4 candidate patches) under perturbations (blur, brightness, rotation, edges, location) and global occlusion (50%, 70%, or 80% of the image masked). Tests robustness and amodal completion.
- Level 2 — Single-attribute rule: Raven-style 3×3 grids with one missing cell; one attribute (color, count, or orientation) follows a row-wise rule. Includes uniform, 3-different, 2-same-1-different, count-progression, arithmetic, and min-max subtasks (~5,956 samples).
- Level 3 — Multi-attribute composition: the same 3×3 format, but with multiple attributes (color, count, orientation, object identity) governed by row-wise, grid-wise, or spiral rules (~2,522 samples).
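The approximate per-level sample counts quoted above follow directly from the per-config `num_examples` values in the metadata:

```python
# Per-config test-split sizes, copied from the dataset_info metadata above.
LEVEL_1_SIZES = [1000] * 11  # 11 Level-1 configs, 1000 samples each
LEVEL_2_SIZES = [500, 500, 458, 500, 498, 500, 500, 500, 500, 500, 500, 500]
LEVEL_3_SIZES = [350, 464, 500, 355, 374, 479]

level_1_total = sum(LEVEL_1_SIZES)  # 11000
level_2_total = sum(LEVEL_2_SIZES)  # 5956
level_3_total = sum(LEVEL_3_SIZES)  # 2522
grand_total = level_1_total + level_2_total + level_3_total  # 19478, i.e. ~19k
```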
Main findings: state-of-the-art VLMs perform near chance (25%) on many subtasks under subtle perceptual changes. Performance is stronger on color than on count, and weakest on orientation. When the same logical structure is presented as text, models do much better, indicating a visual-to-symbolic bottleneck rather than a pure reasoning limit. Higher resolution and guided/thinking prompts help, but do not close the gap to human baselines.
## Main Results (Guided Prompting, Thinking Mode When Available)
Accuracy (%) across levels and subtasks. Random chance = 25%.
| Setting | GPT-5 | GPT-4o | Gemini-2.5 | Qwen3-VL-4B | Qwen3-VL-30B | Mimo-VL-7B |
|---|---|---|---|---|---|---|
| Level-1 | ||||||
| Edges | 27.17 | 23.91 | 25.00 | 16.67 | 25.00 | 22.30 |
| Location | 23.71 | 20.62 | 26.00 | 23.16 | 22.40 | 25.77 |
| Rotation | 35.42 | 26.04 | 34.38 | 37.50 | 36.05 | 29.17 |
| Brightness | 25.26 | 27.37 | 27.37 | 31.52 | 29.47 | 27.37 |
| Blur | 31.18 | 25.26 | 26.32 | 24.73 | 24.28 | 26.32 |
| Global@50% | 42.86 | 20.88 | 57.14 | 37.50 | 47.25 | 48.35 |
| Global@80% | 32.61 | 22.83 | 36.96 | 25.88 | 35.87 | 30.43 |
| Level-1 Average | 31.10 | 23.86 | 33.28 | 28.17 | 31.20 | 29.22 |
| Level-2 | ||||||
| Uniform Color | 96.00 | 21.00 | 97.00 | 66.20 | 88.00 | 78.95 |
| Uniform Count | 61.00 | 25.00 | 90.91 | 40.82 | 59.00 | 52.75 |
| Uniform Orientation | 22.22 | 25.25 | 26.53 | 26.00 | 23.00 | 19.19 |
| Count Progression | 50.00 | 13.00 | 77.00 | 37.20 | 48.00 | 36.96 |
| Count Arithmetic | 52.00 | 22.00 | 75.76 | 43.20 | 49.00 | 33.33 |
| Level-2 Average | 49.79 | 24.12 | 62.29 | 37.18 | 46.75 | 39.15 |
| Level-3 | ||||||
| Independent Color-Object-Orientation | 34.00 | 25.25 | 38.00 | 27.39 | 32.60 | 19.00 |
| Independent Count-Object-Color | 34.00 | 24.00 | 44.00 | 29.45 | 36.34 | 29.00 |
| Coupled Color-Orientation | 24.24 | 24.00 | 16.33 | 26.13 | 29.43 | 20.00 |
| Coupled Color-Count | 30.00 | 22.00 | 21.21 | 27.46 | 33.33 | 28.00 |
| Spiral Color-Count-Object | 56.00 | 30.00 | 54.17 | 28.63 | 36.00 | 33.00 |
| Level-3 Average | 34.39 | 23.86 | 33.73 | 26.31 | 31.36 | 25.17 |
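All figures in these tables are plain four-way accuracy. A minimal scorer for reproducing them is sketched below; the option-letter response format is an assumption for illustration, not part of the released fields:

```python
import re


def extract_choice(response: str) -> "str | None":
    """Pull the first standalone option letter A-D from a model response (assumed format)."""
    m = re.search(r"\b([A-D])\b", response.strip().upper())
    return m.group(1) if m else None


def accuracy(predictions: "list[str]", answers: "list[str]") -> float:
    """Exact-match accuracy (%) between extracted choices and gold answers."""
    correct = sum(
        extract_choice(p) == a.strip().upper() for p, a in zip(predictions, answers)
    )
    return 100.0 * correct / len(answers)
```

With four options, guessing uniformly at random yields the 25% chance baseline referenced throughout.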
## Finetuning on Level-1 (Qwen2.5-VL-3B)
| Setting | Original | Finetuned | Human Baseline |
|---|---|---|---|
| Location | 24.3 | 42.8 | 94.1 |
| Blur | 23.9 | 37.5 | 84.3 |
| Brightness | 23.7 | 39.8 | 85.6 |
| Rotation | 25.5 | 50.8 | 92.0 |
| Edges | 25.1 | 33.2 | 82.6 |
| Global (50%) | 24.9 | 52.2 | 96.1 |
| Global (80%) | 23.9 | 38.6 | 98.0 |
| Average | 24.5 | 43.7 | 90.4 |
## Single-Attribute Recognition (Perceptual Grounding)
Accuracy (%) when models are asked to report a single attribute (color, orientation, or count) for one grid cell.
| Attribute | GPT-4o | GPT-5 |
|---|---|---|
| Color | 84.6 | 97.6 |
| Orientation | 39.8 | 49.6 |
| Count | 72.4 | 94.2 |
## Impact of Thinking Mode

Accuracy (%) with thinking mode enabled (✓) vs. disabled (✗). Open-source models improve substantially with thinking enabled.
| Level | GPT-5 (high) | GPT-5 (low) | Mimo-VL ✓ | Mimo-VL ✗ | Qwen3-4B ✓ | Qwen3-4B ✗ | Qwen3-30B ✓ | Qwen3-30B ✗ |
|---|---|---|---|---|---|---|---|---|
| Level-1 | 32.61 | 31.43 | 29.22 | 23.91 | 28.17 | 23.16 | 31.20 | 23.60 |
| Level-2 | 49.79 | 47.01 | 39.15 | 26.68 | 37.18 | 24.08 | 46.75 | 28.25 |
| Level-3 | 34.39 | 32.89 | 25.17 | 25.23 | 26.31 | 23.50 | 31.36 | 24.00 |
## Impact of Image Resolution (GPT-5)
Accuracy (%) at different input resolutions. All levels improve with higher resolution.
| Resolution | Level-1 | Level-2 | Level-3 |
|---|---|---|---|
| 512×512 | 45.17 | 42.83 | 31.63 |
| 1024×1024 | 54.01 | 49.61 | 35.48 |
| 2048×2048 | 56.51 | 48.99 | 40.07 |
## Citation

```bibtex
@article{visres2025,
  title={VisRes Bench: On Evaluating the Visual Reasoning Capabilities of VLMs},
  author={Malagurski T{\"o}rtei, Brigitta and Dahou, Yasser and Huynh, Ngoc Dung and others},
  journal={arXiv preprint arXiv:2512.21194},
  year={2025}
}
```