jpeper committed · verified
Commit dd23d71 · 1 Parent(s): 02ac355

Upload folder using huggingface_hub

Files changed (4)
  1. README.md +84 -0
  2. benchmark_results.csv +28 -0
  3. ludobench.csv +0 -0
  4. ludobench.parquet +3 -0
README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ dataset_info:
+   features:
+   - name: ID
+     dtype: int64
+   - name: Game
+     dtype: string
+   - name: tier
+     dtype: int64
+   - name: Question
+     dtype: string
+   - name: Answer
+     dtype: string
+   - name: game_state_url
+     dtype: string
+ configs:
+ - config_name: default
+   data_files:
+   - split: test
+     path: ludobench.parquet
+ license: mit
+ task_categories:
+ - visual-question-answering
+ - question-answering
+ language:
+ - en
+ size_categories:
+ - n<1K
+ pretty_name: LudoBench
+ tags:
+ - board-games
+ - multimodal-reasoning
+ - benchmark
+ ---
+
+ # LudoBench
+
+ **LLMs as Rules Oracles: Exploring Real-World Multimodal Reasoning in Tabletop Strategy Game Environments**
+
+ *ICLR 2026*
+
+ [![Paper](https://img.shields.io/badge/Paper-OpenReview-blue)](https://openreview.net/forum?id=TOgQ00DEek)
+ [![Dataset](https://img.shields.io/badge/Dataset-HuggingFace-yellow)](https://huggingface.co/datasets/launch/LudoBench)
+ [![GitHub](https://img.shields.io/badge/Code-GitHub-black)](https://github.com/jpeper/LudoBench)
+ [![Demo](https://img.shields.io/badge/Demo-HF%20Space-orange)](https://huggingface.co/spaces/launch/LudoBench)
+
+ ---
+
+ A multimodal benchmark evaluating LLM/VLM reasoning across 5 strategy games and 3 difficulty tiers.
+
+ ## Dataset Description
+
+ - **638** annotated question-answer pairs
+ - **5 games**: Kingdomino, Res Arcana, Pax Renaissance, Carcassonne, Catan
+ - **3 tiers**: Environment Perception (T1), Rules Integration (T2), Short-Horizon Optimization (T3)
+ - **3 rules modalities**: None (parametric), Text (text rulebook), Image (image rulebook)
+
+ ## Fields
+
+ | Field | Description |
+ |-------|-------------|
+ | `ID` | Unique question identifier |
+ | `Game` | Board game name |
+ | `tier` | Difficulty tier (1, 2, or 3) |
+ | `Question` | The question text |
+ | `Answer` | Expected answer |
+ | `game_state_url` | Path(s) to game state image(s), semicolon-separated if multiple |
+
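As a reading aid, the row schema above can be sketched as a small Python dataclass with a helper for splitting the semicolon-separated `game_state_url` field. The example record and image paths below are hypothetical, not drawn from the dataset.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LudoBenchRow:
    """One LudoBench question-answer pair, mirroring the parquet schema."""
    ID: int
    Game: str
    tier: int            # 1 = Environment Perception, 2 = Rules Integration, 3 = Short-Horizon Optimization
    Question: str
    Answer: str
    game_state_url: str  # semicolon-separated when a question references multiple images

    def image_paths(self) -> List[str]:
        # Split on ";" and drop empty fragments.
        return [p.strip() for p in self.game_state_url.split(";") if p.strip()]


# Hypothetical record for illustration only.
row = LudoBenchRow(
    ID=0,
    Game="Catan",
    tier=2,
    Question="Which player may place a road on edge E?",
    Answer="Red",
    game_state_url="states/catan/board.png;states/catan/hand.png",
)
print(row.image_paths())  # ['states/catan/board.png', 'states/catan/hand.png']
```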
+ ## Benchmark Results
+
+ See `benchmark_results.csv` for accuracy scores of 9 models across all game/tier/modality splits.
+
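The CSV is laid out wide: each row label is a "Model | Tier" pair and each column header is a "Game | Modality" pair, with a few empty cells where a run is missing. A minimal stdlib sketch of reshaping it into long-form records (the two-column sample below is truncated for illustration):

```python
import csv
import io

# Truncated stand-in for benchmark_results.csv, using its real header style.
sample = """row,KingD | None,KingD | Text
GPT-4o | T1,0.381,0.571
o1 | T1,0.524,0.429
"""

records = []
for line in csv.DictReader(io.StringIO(sample)):
    # Row labels encode "Model | Tier"; column headers encode "Game | Modality".
    model, tier = (part.strip() for part in line.pop("row").split("|"))
    for col, value in line.items():
        game, modality = (part.strip() for part in col.split("|"))
        if value:  # skip empty cells (missing runs)
            records.append({"model": model, "tier": tier,
                            "game": game, "modality": modality,
                            "accuracy": float(value)})

print(records[0])
```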
+ ## Citation
+
+ ```bibtex
+ @inproceedings{peper2026ludobench,
+   title={{LLMs} as Rules Oracles: Exploring Real-World Multimodal Reasoning in Tabletop Strategy Game Environments},
+   author={Peper, Joseph J. and Gandra, Sai Krishna and Zhang, Yunxiang and Chennareddy, Vaibhav and Jha, Shloki and Payani, Ali and Wang, Lu},
+   booktitle={Proceedings of the Fourteenth International Conference on Learning Representations (ICLR)},
+   year={2026},
+   address={Rio de Janeiro, Brazil}
+ }
+ ```
benchmark_results.csv ADDED
@@ -0,0 +1,28 @@
+ row,KingD | None,KingD | Text,KingD | Image,Res Arcana | None,Res Arcana | Text,Res Arcana | Image,Pax Ren. | None,Pax Ren. | Text,Pax Ren. | Image,Carca. | None,Carca. | Text,Carca. | Image,Catan | None,Catan | Text,Catan | Image
+ GPT-4o | T1,0.381,0.571,0.429,0.65,0.65,0.575,0.375,0.475,0.6,0.425,0.35,0.4,0.45,0.45,0.45
+ o1 | T1,0.524,0.429,0.429,0.675,0.525,0.6,0.45,0.55,0.475,0.45,0.45,0.375,0.475,0.55,0.55
+ GPT-4.1 | T1,0.619,0.524,0.619,0.775,0.725,0.75,0.525,0.575,0.6,0.575,0.4,0.475,0.575,0.45,0.65
+ o3 | T1,0.75,0.675,0.65,0.775,0.775,0.7,0.475,0.675,0.65,0.45,0.5,0.575,0.6,0.575,0.55
+ GPT-5.1 | T1,0.75,0.65,0.675,0.775,0.8,0.8,0.6,0.6,0.65,0.725,0.525,0.55,0.6,0.575,0.575
+ Gemini 2.5 Flash | T1,0.524,0.476,0.381,0.675,0.775,0.5,0.55,0.4,0.375,0.45,0.4,0.35,0.625,0.5,0.325
+ Gemini 2.5 Pro | T1,0.524,0.524,0.333,0.65,0.7,0.525,0.65,0.725,0.375,0.425,0.35,0.45,0.575,0.65,0.425
+ Gemini 3 Pro | T1,0.825,0.8,0.85,0.875,0.85,0.775,0.775,0.75,0.8,0.825,0.75,0.65,0.525,0.625,0.7
+ Claude 4.5 Sonnet | T1,0.571,0.619,0.524,0.65,0.8,0.8,0.6,0.65,,0.375,0.325,0.3,0.6,0.625,
+ GPT-4o | T2,0.3,0.433,0.3,0.225,0.4,0.35,0.125,0.525,0.475,0.125,0.15,0.225,0.225,0.2,0.2
+ o1 | T2,0.2,0.4,0.267,0.35,0.35,0.35,0.25,0.4,0.45,0.225,0.25,0.3,0.15,0.25,0.225
+ GPT-4.1 | T2,0.333,0.433,0.3,0.4,0.5,0.475,0.275,0.4,0.375,0.175,0.2,0.15,0.325,0.225,0.325
+ o3 | T2,0.35,0.4,0.375,0.325,0.625,0.475,0.3,0.55,0.575,0.275,0.275,0.275,0.375,0.275,0.275
+ GPT-5.1 | T2,0.3,0.325,0.275,0.325,0.525,0.525,0.2,0.6,0.467,0.25,0.25,0.325,0.275,0.275,0.3
+ Gemini 2.5 Flash | T2,0.167,0.3,0.267,0.225,0.3,0.3,0.25,0.3,0.375,0.25,0.3,0.2,0.25,0.275,0.125
+ Gemini 2.5 Pro | T2,0.367,0.367,0.267,0.375,0.475,0.375,0.1,0.4,0.25,0.225,0.275,0.15,0.275,0.35,0.175
+ Gemini 3 Pro | T2,0.725,0.675,0.75,0.55,0.625,0.65,0.325,0.475,0.425,0.425,0.425,0.325,0.5,0.325,0.5
+ Claude 4.5 Sonnet | T2,0.3,0.333,0.233,0.433,0.633,0.4,0.3,0.533,,0.275,0.275,0.375,0.225,0.275,
+ GPT-4o | T3,0.0,0.0,0.0,0.02,0.06,0.0,0.053,0.053,0.018,0.04,0.0,0.04,0.065,0.0,0.032
+ o1 | T3,0.0,0.02,0.0,0.04,0.06,0.04,0.105,0.018,0.053,0.08,0.04,0.04,0.032,0.065,0.032
+ GPT-4.1 | T3,0.04,0.08,0.1,0.02,0.02,0.02,0.123,0.123,0.088,0.02,0.0,0.06,0.194,0.065,0.032
+ o3 | T3,0.06,0.12,0.04,0.0,0.06,0.06,0.07,0.175,0.211,0.12,0.06,0.1,0.194,0.226,0.258
+ GPT-5.1 | T3,0.16,0.06,0.06,0.08,0.08,0.08,0.123,0.088,0.211,0.02,0.1,0.1,0.129,0.161,0.161
+ Gemini 2.5 Flash | T3,0.0,0.02,0.0,0.0,0.08,0.08,0.018,0.053,0.0,0.02,0.06,0.04,0.032,0.0,0.065
+ Gemini 2.5 Pro | T3,0.04,0.02,0.0,0.12,0.3,0.08,0.123,0.053,0.088,0.06,0.06,0.04,0.129,0.097,0.161
+ Gemini 3 Pro | T3,0.16,0.16,0.14,0.2,0.26,0.18,0.053,0.105,0.193,0.06,0.02,0.04,0.129,0.161,0.032
+ Claude 4.5 Sonnet | T3,0.02,0.04,0.04,0.04,0.06,0.08,0.053,0.088,,0.04,0.02,0.06,0.0,0.032,
ludobench.csv ADDED
The diff for this file is too large to render. See raw diff
 
ludobench.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57085d48a31e89b0fa050b7dd923d90ccb84dd5c1bfd94c3f357a0f731f14a2f
+ size 43073