---
license: mit
task_categories:
  - visual-question-answering
  - image-classification
language:
  - en
tags:
  - maze
  - spatial-reasoning
  - multimodal
  - vision-language
  - benchmark
size_categories:
  - n<1K
---

# MazeBench

The evaluation set from the paper *From Pixels to BFS: High Maze Accuracy Does Not Imply Visual Planning*.

**Paper:** arXiv:2603.26839

**Code:** https://github.com/alrod97/LLMs_mazes

## Overview

MazeBench contains 110 procedurally generated maze images spanning 8 structural families and grid sizes from 5×5 to 20×20, each with ground-truth shortest-path annotations. These are the mazes used to evaluate multimodal LLMs in the paper.

## Maze Families

- `sanity` — open corridors, minimal branching
- `bottleneck` — narrow chokepoints
- `dead_ends` — many dead-end branches
- `snake` — long winding corridors
- `symmetric` — mirror-symmetric layouts
- `trap` — trap tiles that must be avoided
- `near_goal_blocked` — goal is close but the path is indirect
- `near_start_blocked` — start area has limited exits

## Contents

- `gen_maze_001.png` ... `gen_maze_110.png` — 1024×1024 maze images
- `maze_annotations.json` — ground-truth annotations (reachability, shortest path length, accepted shortest paths)
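The annotations file can be read with the standard `json` module. The sketch below assumes a per-image record layout; the field names (`reachable`, `shortest_path_length`, `accepted_shortest_paths`) are illustrative, inferred from the description above, and a small stand-in file is written first so the snippet is self-contained — check `maze_annotations.json` itself for the real schema:

```python
import json
import tempfile

# Hypothetical record layout, inferred from the README's description of
# maze_annotations.json; the actual schema may differ.
sample = {
    "gen_maze_001.png": {
        "reachable": True,
        "shortest_path_length": 14,
        "accepted_shortest_paths": [[[0, 0], [0, 1], [1, 1]]],
    }
}

# Stand-in for generated_mazes/maze_annotations.json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name

with open(path) as f:
    annotations = json.load(f)

record = annotations["gen_maze_001.png"]
print(record["reachable"], record["shortest_path_length"])
```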

## Usage

```bash
pip install huggingface_hub
huggingface-cli download albertoRodriguez97/MazeBench --repo-type dataset --local-dir generated_mazes/
```
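The same download can also be scripted from Python via `huggingface_hub.snapshot_download`. This is a minimal sketch of the equivalent call; the function name `download_mazebench` is just a wrapper for illustration, and the call needs network access when actually invoked:

```python
def download_mazebench(local_dir="generated_mazes"):
    """Fetch all MazeBench files into local_dir (network required)."""
    # Import lazily so the module loads even without huggingface_hub installed.
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    return snapshot_download(
        repo_id="albertoRodriguez97/MazeBench",
        repo_type="dataset",
        local_dir=local_dir,
    )

# Usage: download_mazebench() mirrors the CLI command above and returns
# the local path of the downloaded snapshot.
```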

## Citation

```bibtex
@article{rodriguezsalgado2026mazebench,
  title   = {From Pixels to BFS: High Maze Accuracy Does Not Imply Visual Planning},
  author  = {Rodriguez Salgado, Alberto},
  journal = {arXiv preprint arXiv:2603.26839},
  year    = {2026},
}
```