---
license: apache-2.0
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - benchmark
  - vision
  - reasoning
  - multimodal
  - evaluation
pretty_name: VisRes-Bench
dataset_info:
  - config_name: level_1_global_occlusion_50
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_global_occlusion_70
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_global_occlusion_80
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_edges
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_location_random_sampling
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_brightness
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_blur
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_rotation
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_rotation_random_sampling
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_edges_random_sampling
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_1_location
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 1000
  - config_name: level_2_uniform_count
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_count_progression
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_uniform_orientation
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 458
  - config_name: level_2_count_2_same_1_diff
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_orientation_2same_1diff
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 498
  - config_name: level_2_uniform_color
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_count_arithmetic
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_count_minmax
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_orientation_3_diff
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_color_2same_1diff
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_color_3_diff
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_2_count_3_diff
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_3_spiral_color_orientation
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 350
  - config_name: level_3_spiral_object_color
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 464
  - config_name: level_3_coupled_color_count
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 500
  - config_name: level_3_independent_color_object_orientation
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 355
  - config_name: level_3_coupled_color_orientation
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 374
  - config_name: level_3_Independent_count_object_color
    features:
      - name: id
        dtype: string
      - name: task
        dtype: string
      - name: level
        dtype: string
      - name: guided_question
        dtype: string
      - name: generic_question
        dtype: string
      - name: images
        sequence: image
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: test
        num_examples: 479
configs:
  - config_name: level_1_global_occlusion_50
    data_files:
      - split: test
        path: level_1_global_occlusion_50percent/test-*
  - config_name: level_1_global_occlusion_70
    data_files:
      - split: test
        path: level_1_global_occlusion_70percent/test-*
  - config_name: level_1_global_occlusion_80
    data_files:
      - split: test
        path: level_1_global_occlusion_80percent/test-*
  - config_name: level_1_edges
    data_files:
      - split: test
        path: level_1_edges_eval_6k_location_only_dino_mode_options/test-*
  - config_name: level_1_location_random_sampling
    data_files:
      - split: test
        path: level_1_eval_6k_location_only_random_sampling/test-*
  - config_name: level_1_brightness
    data_files:
      - split: test
        path: level_1_eval_6k_brightness_dino_options/test-*
  - config_name: level_1_blur
    data_files:
      - split: test
        path: level_1_eval_6k_blur_dino_options/test-*
  - config_name: level_1_rotation
    data_files:
      - split: test
        path: level_1_eval_6k_rotation_direct_dino_options/test-*
  - config_name: level_1_rotation_random_sampling
    data_files:
      - split: test
        path: level_1_eval_6k_single_rotation_same_options/test-*
  - config_name: level_1_edges_random_sampling
    data_files:
      - split: test
        path: level_1_edges_eval_6k_location_only_random_sampling/test-*
  - config_name: level_1_location
    data_files:
      - split: test
        path: level_1_eval_6k_location_only_dino_mode_options/test-*
  - config_name: level_2_uniform_count
    data_files:
      - split: test
        path: level_2_count_only/test-*
  - config_name: level_2_count_progression
    data_files:
      - split: test
        path: level_2_count_progression_mixed/test-*
  - config_name: level_2_uniform_orientation
    data_files:
      - split: test
        path: level_2_orientation_only/test-*
  - config_name: level_2_count_2_same_1_diff
    data_files:
      - split: test
        path: level_2_count_distribution_2same_1diff/test-*
  - config_name: level_2_orientation_2same_1diff
    data_files:
      - split: test
        path: level_2_orientation_distribution_2same_1diff/test-*
  - config_name: level_2_uniform_color
    data_files:
      - split: test
        path: level_2_color_only/test-*
  - config_name: level_2_count_arithmetic
    data_files:
      - split: test
        path: level_2_count_operations/test-*
  - config_name: level_2_count_minmax
    data_files:
      - split: test
        path: level_2_count_minmax/test-*
  - config_name: level_2_orientation_3_diff
    data_files:
      - split: test
        path: level_2_orientation_distribution/test-*
  - config_name: level_2_color_2same_1diff
    data_files:
      - split: test
        path: level_2_color_distribution_2same_1diff/test-*
  - config_name: level_2_color_3_diff
    data_files:
      - split: test
        path: level_2_color_distribution/test-*
  - config_name: level_2_count_3_diff
    data_files:
      - split: test
        path: level_2_count_distribution/test-*
  - config_name: level_3_spiral_color_orientation
    data_files:
      - split: test
        path: level_3_compositional_spiral_orientation/test-*
  - config_name: level_3_spiral_object_color
    data_files:
      - split: test
        path: level_3_compositional_spiral_object_color/test-*
  - config_name: level_3_coupled_color_count
    data_files:
      - split: test
        path: level_3_coupled_count_color/test-*
  - config_name: level_3_independent_color_object_orientation
    data_files:
      - split: test
        path: level_3_independent_color_object_orientation/test-*
  - config_name: level_3_coupled_color_orientation
    data_files:
      - split: test
        path: level_3_coupled_orientation_color/test-*
  - config_name: level_3_Independent_count_object_color
    data_files:
      - split: test
        path: level_3_independent_distribution_arithmetic_object/test-*
---

# VisRes Bench


VisRes Bench is a benchmark for evaluating the visual reasoning capabilities of Vision-Language Models (VLMs) in naturalistic settings without contextual language supervision. It is introduced in the paper VisRes Bench: On Evaluating the Visual Reasoning Capabilities of VLMs.

## Paper Summary

Vision-Language Models excel at captioning and VQA, but it is unclear how much they rely on visual reasoning versus linguistic priors. VisRes addresses this by using image-only, four-choice tasks on real-world images (~19,000 samples) so that performance reflects visual reasoning rather than textual shortcuts.

The benchmark is organized in three levels of increasing complexity:

- **Level 1 — Perceptual grounding:** local patch completion (a masked tile plus 4 candidate patches) under perturbations (blur, brightness, rotation, edges, location) and global occlusion (50%, 70%, or 80% of the image masked). Tests robustness and amodal completion.
- **Level 2 — Single-attribute rule:** Raven-style 3×3 grids with one missing cell; a single attribute (color, count, or orientation) follows a row-wise rule. Includes uniform, 3-different, 2-similar-1-different, count progression, arithmetic, and min-max subtasks (~5,956 samples).
- **Level 3 — Multi-attribute composition:** the same 3×3 format, but multiple attributes (color, count, orientation, object identity) governed by row-wise, grid-wise, or spiral rules (~2,522 samples).

Main findings: State-of-the-art VLMs perform near random (25%) on many subtasks under subtle perceptual changes. Performance is stronger on color than count, and weakest on orientation. When the same logical structure is given as text, models do much better, indicating a visual-to-symbolic bottleneck rather than a pure reasoning limit. Higher resolution and guided/thinking prompts help but do not close the gap to human baselines.
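Every config shares one schema (`id`, `task`, `level`, `guided_question`, `generic_question`, `images`, `question`, `answer`), so a single loop covers all subtasks. Below is a minimal loading sketch with the 🤗 `datasets` library; the repo id is a placeholder, and how `question` relates to the guided/generic variants is an assumption here, not something the card specifies:

```python
# Sketch: load VisRes-Bench Level-1 configs and build one text prompt per example.
# REPO_ID is a placeholder -- substitute the dataset's actual Hub id.
REPO_ID = "<org>/visres_bench"

LEVEL_1_CONFIGS = [
    "level_1_global_occlusion_50",
    "level_1_global_occlusion_70",
    "level_1_global_occlusion_80",
    "level_1_edges",
    "level_1_brightness",
    "level_1_blur",
    "level_1_rotation",
    "level_1_location",
]

def build_prompt(example: dict, guided: bool = False) -> str:
    """Pick the guided or generic question; fall back to `question` if unset."""
    key = "guided_question" if guided else "generic_question"
    return example.get(key) or example["question"]

def iter_level_1(repo_id: str = REPO_ID):
    """Yield (config_name, prompt, images, answer) for every Level-1 example."""
    from datasets import load_dataset  # lazy import; needs `pip install datasets`
    for cfg in LEVEL_1_CONFIGS:
        ds = load_dataset(repo_id, cfg, split="test")
        for ex in ds:
            yield cfg, build_prompt(ex, guided=True), ex["images"], ex["answer"]
```

Swapping `LEVEL_1_CONFIGS` for the Level-2 and Level-3 names listed in the YAML header covers the full benchmark.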


## Main Results (Guided Prompting, Thinking Mode When Available)

Accuracy (%) across levels and subtasks. Random chance = 25%.

| Setting | GPT-5 | GPT-4o | Gemini-2.5 | Qwen3-VL-4B | Qwen3-VL-30B | Mimo-VL-7B |
|---|---|---|---|---|---|---|
| **Level-1** |  |  |  |  |  |  |
| Edges | 27.17 | 23.91 | 25.00 | 16.67 | 25.00 | 22.30 |
| Location | 23.71 | 20.62 | 26.00 | 23.16 | 22.40 | 25.77 |
| Rotation | 35.42 | 26.04 | 34.38 | 37.50 | 36.05 | 29.17 |
| Brightness | 25.26 | 27.37 | 27.37 | 31.52 | 29.47 | 27.37 |
| Blur | 31.18 | 25.26 | 26.32 | 24.73 | 24.28 | 26.32 |
| Global@50% | 42.86 | 20.88 | 57.14 | 37.50 | 47.25 | 48.35 |
| Global@80% | 32.61 | 22.83 | 36.96 | 25.88 | 35.87 | 30.43 |
| **Level-1 Average** | 31.10 | 23.86 | 33.28 | 28.17 | 31.20 | 29.22 |
| **Level-2** |  |  |  |  |  |  |
| Uniform Color | 96.00 | 21.00 | 97.00 | 66.20 | 88.00 | 78.95 |
| Uniform Count | 61.00 | 25.00 | 90.91 | 40.82 | 59.00 | 52.75 |
| Uniform Orientation | 22.22 | 25.25 | 26.53 | 26.00 | 23.00 | 19.19 |
| Count Progression | 50.00 | 13.00 | 77.00 | 37.20 | 48.00 | 36.96 |
| Count Arithmetic | 52.00 | 22.00 | 75.76 | 43.20 | 49.00 | 33.33 |
| **Level-2 Average** | 49.79 | 24.12 | 62.29 | 37.18 | 46.75 | 39.15 |
| **Level-3** |  |  |  |  |  |  |
| Independent Color-Object-Orientation | 34.00 | 25.25 | 38.00 | 27.39 | 32.60 | 19.00 |
| Independent Count-Object-Color | 34.00 | 24.00 | 44.00 | 29.45 | 36.34 | 29.00 |
| Coupled Color-Orientation | 24.24 | 24.00 | 16.33 | 26.13 | 29.43 | 20.00 |
| Coupled Color-Count | 30.00 | 22.00 | 21.21 | 27.46 | 33.33 | 28.00 |
| Spiral Color-Count-Object | 56.00 | 30.00 | 54.17 | 28.63 | 36.00 | 33.00 |
| **Level-3 Average** | 34.39 | 23.86 | 33.73 | 26.31 | 31.36 | 25.17 |

## Finetuning on Level-1 (Qwen2.5-VL-3B)

| Setting | Original | Finetuned | Human Baseline |
|---|---|---|---|
| Location | 24.3 | 42.8 | 94.1 |
| Blur | 23.9 | 37.5 | 84.3 |
| Brightness | 23.7 | 39.8 | 85.6 |
| Rotation | 25.5 | 50.8 | 92.0 |
| Edges | 25.1 | 33.2 | 82.6 |
| Global (50%) | 24.9 | 52.2 | 96.1 |
| Global (80%) | 23.9 | 38.6 | 98.0 |
| **Average** | 24.5 | 43.7 | 90.4 |

## Single-Attribute Recognition (Perceptual Grounding)

Accuracy (%) when models are asked to report a single attribute (color, orientation, or count) for one grid cell.

| Attribute | GPT-4o | GPT-5 |
|---|---|---|
| Color | 84.6 | 97.6 |
| Orientation | 39.8 | 49.6 |
| Count | 72.4 | 94.2 |

## Impact of Thinking Mode

Accuracy (%) with thinking mode enabled (✓) vs disabled (✗). Open-source models improve substantially with thinking.

| Level | GPT-5 (high) | GPT-5 (low) | Mimo-VL ✓ | Mimo-VL ✗ | Qwen3-4B ✓ | Qwen3-4B ✗ | Qwen3-30B ✓ | Qwen3-30B ✗ |
|---|---|---|---|---|---|---|---|---|
| Level-1 | 32.61 | 31.43 | 29.22 | 23.91 | 28.17 | 23.16 | 31.20 | 23.60 |
| Level-2 | 49.79 | 47.01 | 39.15 | 26.68 | 37.18 | 24.08 | 46.75 | 28.25 |
| Level-3 | 34.39 | 32.89 | 25.17 | 25.23 | 26.31 | 23.50 | 31.36 | 24.00 |

## Impact of Image Resolution (GPT-5)

Accuracy (%) at different input resolutions. Higher resolution helps overall, though Level-2 peaks at 1024×1024.

| Resolution | Level-1 | Level-2 | Level-3 |
|---|---|---|---|
| 512×512 | 45.17 | 42.83 | 31.63 |
| 1024×1024 | 54.01 | 49.61 | 35.48 |
| 2048×2048 | 56.51 | 48.99 | 40.07 |

## Citation

```bibtex
@article{visres2025,
  title={VisRes Bench: On Evaluating the Visual Reasoning Capabilities of VLMs},
  author={Malagurski T{\"o}rtei, Brigitta and Dahou, Yasser and Huynh, Ngoc Dung and others},
  journal={arXiv preprint arXiv:2512.21194},
  year={2025}
}
```