---
dataset_info:
  features:
  - name: ID
    dtype: int64
  - name: Game
    dtype: string
  - name: tier
    dtype: int64
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  - name: game_state_url
    dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: ludobench.parquet
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- n<1K
pretty_name: LudoBench
tags:
- board-games
- multimodal-reasoning
- benchmark
---
# LudoBench
**LLMs as Rules Oracles: Exploring Real-World Multimodal Reasoning in Tabletop Strategy Game Environments**
*ICLR 2026*
[Paper](https://openreview.net/forum?id=TOgQ00DEek) · [Dataset](https://huggingface.co/datasets/launch/LudoBench) · [Code](https://github.com/jpeper/LudoBench) · [Demo](https://huggingface.co/spaces/launch/LudoBench)
---
A multimodal board-game benchmark evaluating LLM/VLM reasoning
across 5 strategy games and 3 difficulty tiers.
## Dataset Description
- **638** annotated question-answer pairs
- **5 games**: Kingdomino, Res Arcana, Pax Renaissance, Carcassonne, Catan
- **3 tiers**: Environment Perception (T1), Rules Integration (T2), Short-Horizon Optimization (T3)
- **3 rules modalities**: None (parametric), Text (text rulebook), Image (image rulebook)
## Fields
| Field | Description |
|-------|-------------|
| `ID` | Unique question identifier |
| `Game` | Board game name |
| `tier` | Difficulty tier (1, 2, or 3) |
| `Question` | The question text |
| `Answer` | Expected answer |
| `game_state_url` | Path(s) to game state image(s), semicolon-separated if multiple |
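Since `game_state_url` packs one or more image paths into a single semicolon-separated string, consumers need to split it before loading images. A minimal sketch (the record contents and the `state_image_paths` helper below are illustrative, not part of the dataset tooling):

```python
# Hypothetical record following the schema in the Fields table above.
record = {
    "ID": 17,
    "Game": "Carcassonne",
    "tier": 2,
    "Question": "Which meeple placements are legal on the newly drawn tile?",
    "Answer": "...",
    "game_state_url": "states/017_board.png;states/017_hand.png",
}

def state_image_paths(row):
    """Split the semicolon-separated game_state_url field into a list of paths."""
    return [p for p in row["game_state_url"].split(";") if p]

paths = state_image_paths(record)  # two image paths for this record
```

Single-image questions yield a one-element list, so downstream code can treat every row uniformly.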
## Benchmark Results
See `benchmark_results.csv` for accuracy scores of 9 models across all game/tier/modality splits.
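One common way to slice such a results file is to aggregate accuracy over one axis, e.g. per tier. The sketch below uses a toy in-memory stand-in for `benchmark_results.csv`; the column names (`model`, `tier`, `accuracy`) and values are assumptions about the file's schema, not its actual contents:

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for benchmark_results.csv; column names and scores
# are illustrative assumptions, not the released data.
sample = """model,tier,accuracy
m1,1,0.80
m1,2,0.55
m2,1,0.70
m2,2,0.40
"""

scores = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample)):
    scores[int(row["tier"])].append(float(row["accuracy"]))

# Mean accuracy per tier, averaged across models.
per_tier = {tier: sum(v) / len(v) for tier, v in scores.items()}
```

The same `groupby`-style reduction works for the game and rules-modality splits by keying on those columns instead.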
## Citation
```bibtex
@inproceedings{peper2026ludobench,
title={{LLMs} as Rules Oracles: Exploring Real-World Multimodal Reasoning in Tabletop Strategy Game Environments},
author={Peper, Joseph J. and Gandra, Sai Krishna and Zhang, Yunxiang and Chennareddy, Vaibhav and Jha, Shloki and Payani, Ali and Wang, Lu},
booktitle={Proceedings of the Fourteenth International Conference on Learning Representations (ICLR)},
year={2026},
address={Rio de Janeiro, Brazil}
}
```