---
dataset_info:
  features:
  - name: ID
    dtype: int64
  - name: Game
    dtype: string
  - name: tier
    dtype: int64
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  - name: game_state_url
    dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: ludobench.parquet
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- n<1K
pretty_name: LudoBench
tags:
- board-games
- multimodal-reasoning
- benchmark
---
# LudoBench

**LLMs as Rules Oracles: Exploring Real-World Multimodal Reasoning in Tabletop Strategy Game Environments**

*ICLR 2026*
A multimodal board-game benchmark evaluating LLM and VLM reasoning across 5 strategy games and 3 difficulty tiers.
## Dataset Description
- 638 annotated question-answer pairs
- 5 games: Kingdomino, Res Arcana, Pax Renaissance, Carcassonne, Catan
- 3 tiers: Environment Perception (T1), Rules Integration (T2), Short-Horizon Optimization (T3)
- 3 rules modalities: None (parametric), Text (text rulebook), Image (image rulebook)
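Records follow the schema declared in the YAML header above. A minimal sketch of filtering by tier — the IDs, questions, answers, and image paths below are illustrative placeholders, not real dataset entries:

```python
# Illustrative records matching the LudoBench schema (all values hypothetical).
records = [
    {"ID": 1, "Game": "Catan", "tier": 1,
     "Question": "How many roads does the red player have on the board?",
     "Answer": "5", "game_state_url": "images/catan_001.png"},
    {"ID": 2, "Game": "Carcassonne", "tier": 3,
     "Question": "Which placement of the drawn tile maximizes your score this turn?",
     "Answer": "D4", "game_state_url": "images/carc_board.png;images/carc_tile.png"},
]

# Keep only Tier 1 (Environment Perception) questions.
tier1 = [r for r in records if r["tier"] == 1]
```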
## Fields
| Field | Description |
|---|---|
| ID | Unique question identifier |
| Game | Board game name |
| tier | Difficulty tier (1, 2, or 3) |
| Question | The question text |
| Answer | Expected answer |
| game_state_url | Path(s) to game state image(s), semicolon-separated if multiple |
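Because multi-image states are stored as a single semicolon-separated string, a small helper can recover the individual paths. This is a sketch, not part of any shipped dataset tooling:

```python
def split_state_urls(field: str) -> list[str]:
    """Split a game_state_url value into individual image paths.

    Multi-image game states are stored as one semicolon-separated string,
    e.g. "board.png;hand.png" holds two image paths.
    """
    return [part.strip() for part in field.split(";") if part.strip()]
```

A single-image value comes back unchanged as a one-element list, so the helper can be applied uniformly to every record.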
## Benchmark Results

See `benchmark_results.csv` for accuracy scores of 9 models across all game/tier/modality splits.
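A sketch of aggregating such a results file with only the standard library. The column names (`model`, `game`, `tier`, `modality`, `accuracy`) and the sample rows are assumptions about the CSV layout, not guaranteed to match the actual `benchmark_results.csv`:

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows in the assumed shape of benchmark_results.csv;
# real column names and values may differ.
sample_csv = """model,game,tier,modality,accuracy
model-a,Catan,1,Text,0.80
model-a,Catan,2,Text,0.60
"""

per_tier = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample_csv)):
    per_tier[row["tier"]].append(float(row["accuracy"]))

# Mean accuracy per difficulty tier.
tier_means = {tier: sum(scores) / len(scores) for tier, scores in per_tier.items()}
```

To analyze the real file, replace `io.StringIO(sample_csv)` with an open file handle and group by whichever columns the split of interest uses.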
## Citation

```bibtex
@inproceedings{peper2026ludobench,
  title={{LLMs} as Rules Oracles: Exploring Real-World Multimodal Reasoning in Tabletop Strategy Game Environments},
  author={Peper, Joseph J. and Gandra, Sai Krishna and Zhang, Yunxiang and Chennareddy, Vaibhav and Jha, Shloki and Payani, Ali and Wang, Lu},
  booktitle={Proceedings of the Fourteenth International Conference on Learning Representations (ICLR)},
  year={2026},
  address={Rio de Janeiro, Brazil}
}
```