---
license: mit
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- maze
- spatial-reasoning
- multimodal
- vision-language
- benchmark
size_categories:
- n<1K
---
# MazeBench
The evaluation set from the paper *From Pixels to BFS: High Maze Accuracy Does Not Imply Visual Planning*.
**Paper:** [arXiv:2603.26839](https://arxiv.org/abs/2603.26839)
**Code:** [github.com/alrod97/LLMs_mazes](https://github.com/alrod97/LLMs_mazes)
## Overview
110 procedurally generated maze images spanning 8 structural families and grid sizes from 5x5 to 20x20, with ground-truth shortest-path annotations. These are the mazes used to evaluate multimodal LLMs in the paper.
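Ground-truth shortest paths on grid mazes like these are typically computed with breadth-first search. A minimal sketch on a text-encoded maze (the grid encoding and function name here are illustrative, not the dataset's own format):

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """Breadth-first search on a grid maze.

    grid: list of strings, '#' = wall, any other character = free cell.
    start/goal: (row, col) tuples. Returns the shortest path as a list
    of cells from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent pointers back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

maze = [
    "S..#",
    ".#.#",
    "...G",
]
path = bfs_shortest_path(maze, (0, 0), (2, 3))  # 6 cells: 5 moves from S to G
```

BFS guarantees a minimum-length path on an unweighted grid, which is what a shortest-path annotation requires; any of several equally short paths may be returned, hence the dataset's notion of *accepted* shortest paths.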
## Maze Families
- **sanity** — open corridors, minimal branching
- **bottleneck** — narrow chokepoints
- **dead_ends** — many dead-end branches
- **snake** — long winding corridors
- **symmetric** — mirror-symmetric layouts
- **trap** — trap tiles that must be avoided
- **near_goal_blocked** — goal is close but path is indirect
- **near_start_blocked** — start area has limited exits
## Contents
- `gen_maze_001.png` ... `gen_maze_110.png` — 1024x1024 maze images
- `maze_annotations.json` — ground-truth annotations (reachability, shortest path length, accepted shortest paths)
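A minimal sketch of working with the annotation file. The field names below (`reachable`, `shortest_path_length`, `accepted_paths`) are assumptions based on the description above, not the confirmed schema; inspect `maze_annotations.json` directly for the actual keys.

```python
import json

# Toy annotation entry mirroring the described fields (reachability,
# shortest path length, accepted shortest paths). The real schema in
# maze_annotations.json may differ -- check the file itself.
raw = """{
  "gen_maze_001.png": {
    "reachable": true,
    "shortest_path_length": 14,
    "accepted_paths": [[[0, 0], [0, 1], [1, 1]]]
  }
}"""

annotations = json.loads(raw)
entry = annotations["gen_maze_001.png"]
solvable = [name for name, ann in annotations.items() if ann["reachable"]]
```

Keying annotations by image filename makes it easy to pair each `gen_maze_*.png` with its ground truth when scoring model outputs.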
## Usage
```bash
pip install huggingface_hub
huggingface-cli download albertoRodriguez97/MazeBench --repo-type dataset --local-dir generated_mazes/
```
## Citation
```bibtex
@article{rodriguezsalgado2026mazebench,
title = {From Pixels to BFS: High Maze Accuracy Does Not Imply Visual Planning},
author = {Rodriguez Salgado, Alberto},
journal = {arXiv preprint arXiv:2603.26839},
year = {2026},
}
```