Upload folder using huggingface_hub
- README.md +73 -0
- benchmark_results.csv +28 -0
- ludobench.csv +0 -0
- ludobench.parquet +3 -0
README.md
ADDED
@@ -0,0 +1,73 @@
---
dataset_info:
  features:
  - name: ID
    dtype: int64
  - name: Game
    dtype: string
  - name: tier
    dtype: int64
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  - name: game_state_url
    dtype: string
  - name: json_game_state_url
    dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: ludobench.parquet
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- n<1K
pretty_name: LudoBench
tags:
- board-games
- multimodal-reasoning
- benchmark
---

# LudoBench

A multimodal benchmark evaluating LLM/VLM reasoning across 5 strategy board games and 3 difficulty tiers.

## Dataset Description

- **638** annotated question-answer pairs
- **5 games**: Kingdomino, Res Arcana, Pax Renaissance, Carcassonne, Catan
- **3 tiers**: Environment Perception (T1), Rules Integration (T2), Short-Horizon Optimization (T3)
- **3 modalities**: None (parametric), Text (text rulebook), Image (image rulebook)

## Fields

| Field | Description |
|-------|-------------|
| `ID` | Unique question identifier |
| `Game` | Board game name |
| `tier` | Difficulty tier (1, 2, or 3) |
| `Question` | The question text |
| `Answer` | Expected answer |
| `game_state_url` | Path(s) to game state image(s), semicolon-separated if multiple |
| `json_game_state_url` | Path to JSON game state |

## Benchmark Results

See `benchmark_results.csv` for accuracy scores of 9 models across all game/tier/modality splits.

## Citation

```bibtex
@article{ludobench2026,
  title={LudoBench: Evaluating Multimodal Reasoning through Real-World Tabletop Strategy Games},
  year={2026}
}
```
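As a sketch of consuming the `game_state_url` field described in the card above, which stores one or more image paths separated by semicolons (the row dict and its path values below are hypothetical examples, not taken from the dataset):

```python
# Split the semicolon-separated game_state_url field into a list of image
# paths, per the Fields table. The sample row is hypothetical; real rows
# would come from loading ludobench.parquet.
def split_state_images(row: dict) -> list[str]:
    url = row.get("game_state_url") or ""
    return [p.strip() for p in url.split(";") if p.strip()]

row = {"ID": 1, "Game": "Catan", "tier": 1,
       "game_state_url": "states/board.png; states/hand.png"}
print(split_state_images(row))  # -> ['states/board.png', 'states/hand.png']
```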
benchmark_results.csv
ADDED
@@ -0,0 +1,28 @@
row,KingD | None,KingD | Text,KingD | Image,Res Arcana | None,Res Arcana | Text,Res Arcana | Image,Pax Ren. | None,Pax Ren. | Text,Pax Ren. | Image,Carca. | None,Carca. | Text,Carca. | Image,Catan | None,Catan | Text,Catan | Image
GPT-4o | T1,0.381,0.571,0.429,0.65,0.65,0.575,0.375,0.475,0.6,0.425,0.35,0.4,0.45,0.45,0.45
o1 | T1,0.524,0.429,0.429,0.675,0.525,0.6,0.45,0.55,0.475,0.45,0.45,0.375,0.475,0.55,0.55
GPT-4.1 | T1,0.619,0.524,0.619,0.775,0.725,0.75,0.525,0.575,0.6,0.575,0.4,0.475,0.575,0.45,0.65
o3 | T1,0.75,0.675,0.65,0.775,0.775,0.7,0.475,0.675,0.65,0.45,0.5,0.575,0.6,0.575,0.55
GPT-5.1 | T1,0.75,0.65,0.675,0.775,0.8,0.8,0.6,0.6,0.65,0.725,0.525,0.55,0.6,0.575,0.575
Gemini 2.5 Flash | T1,0.524,0.476,0.381,0.675,0.775,0.5,0.55,0.4,0.375,0.45,0.4,0.35,0.625,0.5,0.325
Gemini 2.5 Pro | T1,0.524,0.524,0.333,0.65,0.7,0.525,0.65,0.725,0.375,0.425,0.35,0.45,0.575,0.65,0.425
Gemini 3 Pro | T1,0.825,0.8,0.85,0.875,0.85,0.775,0.775,0.75,0.8,0.825,0.75,0.65,0.525,0.625,0.7
Claude 4.5 Sonnet | T1,0.571,0.619,0.524,0.65,0.8,0.8,0.6,0.65,,0.375,0.325,0.3,0.6,0.625,
GPT-4o | T2,0.3,0.433,0.3,0.225,0.4,0.35,0.125,0.525,0.475,0.125,0.15,0.225,0.225,0.2,0.2
o1 | T2,0.2,0.4,0.267,0.35,0.35,0.35,0.25,0.4,0.45,0.225,0.25,0.3,0.15,0.25,0.225
GPT-4.1 | T2,0.333,0.433,0.3,0.4,0.5,0.475,0.275,0.4,0.375,0.175,0.2,0.15,0.325,0.225,0.325
o3 | T2,0.35,0.4,0.375,0.325,0.625,0.475,0.3,0.55,0.575,0.275,0.275,0.275,0.375,0.275,0.275
GPT-5.1 | T2,0.3,0.325,0.275,0.325,0.525,0.525,0.2,0.6,0.467,0.25,0.25,0.325,0.275,0.275,0.3
Gemini 2.5 Flash | T2,0.167,0.3,0.267,0.225,0.3,0.3,0.25,0.3,0.375,0.25,0.3,0.2,0.25,0.275,0.125
Gemini 2.5 Pro | T2,0.367,0.367,0.267,0.375,0.475,0.375,0.1,0.4,0.25,0.225,0.275,0.15,0.275,0.35,0.175
Gemini 3 Pro | T2,0.725,0.675,0.75,0.55,0.625,0.65,0.325,0.475,0.425,0.425,0.425,0.325,0.5,0.325,0.5
Claude 4.5 Sonnet | T2,0.3,0.333,0.233,0.433,0.633,0.4,0.3,0.533,,0.275,0.275,0.375,0.225,0.275,
GPT-4o | T3,0.0,0.0,0.0,0.02,0.06,0.0,0.053,0.053,0.018,0.04,0.0,0.04,0.065,0.0,0.032
o1 | T3,0.0,0.02,0.0,0.04,0.06,0.04,0.105,0.018,0.053,0.08,0.04,0.04,0.032,0.065,0.032
GPT-4.1 | T3,0.04,0.08,0.1,0.02,0.02,0.02,0.123,0.123,0.088,0.02,0.0,0.06,0.194,0.065,0.032
o3 | T3,0.06,0.12,0.04,0.0,0.06,0.06,0.07,0.175,0.211,0.12,0.06,0.1,0.194,0.226,0.258
GPT-5.1 | T3,0.16,0.06,0.06,0.08,0.08,0.08,0.123,0.088,0.211,0.02,0.1,0.1,0.129,0.161,0.161
Gemini 2.5 Flash | T3,0.0,0.02,0.0,0.0,0.08,0.08,0.018,0.053,0.0,0.02,0.06,0.04,0.032,0.0,0.065
Gemini 2.5 Pro | T3,0.04,0.02,0.0,0.12,0.3,0.08,0.123,0.053,0.088,0.06,0.06,0.04,0.129,0.097,0.161
Gemini 3 Pro | T3,0.16,0.16,0.14,0.2,0.26,0.18,0.053,0.105,0.193,0.06,0.02,0.04,0.129,0.161,0.032
Claude 4.5 Sonnet | T3,0.02,0.04,0.04,0.04,0.06,0.08,0.053,0.088,,0.04,0.02,0.06,0.0,0.032,
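The CSV keys each row by `model | tier` and each column by `game | modality`, so it can be read directly with Python's `csv` module. A minimal sketch, using as embedded sample data the header plus one row copied verbatim from the file (in practice you would open `benchmark_results.csv` itself):

```python
import csv
import io

# Header plus one data row copied verbatim from benchmark_results.csv.
sample = """row,KingD | None,KingD | Text,KingD | Image,Res Arcana | None,Res Arcana | Text,Res Arcana | Image,Pax Ren. | None,Pax Ren. | Text,Pax Ren. | Image,Carca. | None,Carca. | Text,Carca. | Image,Catan | None,Catan | Text,Catan | Image
GPT-4o | T1,0.381,0.571,0.429,0.65,0.65,0.575,0.375,0.475,0.6,0.425,0.35,0.4,0.45,0.45,0.45
"""

def mean_accuracy(csv_text: str, row_name: str) -> float:
    """Average one model/tier row's accuracies, skipping empty cells
    (some Claude 4.5 Sonnet cells are blank in the file)."""
    for rec in csv.DictReader(io.StringIO(csv_text)):
        if rec["row"] == row_name:
            vals = [float(v) for k, v in rec.items() if k != "row" and v]
            return sum(vals) / len(vals)
    raise KeyError(row_name)

print(round(mean_accuracy(sample, "GPT-4o | T1"), 3))  # -> 0.482
```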
ludobench.csv
ADDED
The diff for this file is too large to render.
ludobench.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ea30674c9e06add0ed4ed33ef138c3b53498c2a15932068d57672a938e32155
size 45259