|
|
--- |
|
|
language: |
|
|
- en |
|
|
- fr |
|
|
- es |
|
|
- nl |
|
|
|
|
|
license: cc-by-nc-4.0 |
|
|
|
|
|
extra_gated_prompt: "This dataset is released for research and non-commercial use only." |
|
|
--- |
|
|
|
|
|
# Dataset Card for Concept |
|
|
|
|
|
This repository contains the data for the 'Concept' benchmark. The board game 'Concept', which resembles a single-modality version of 'Pictionary', is presented as a multilingual benchmark for evaluating the abductive reasoning skills of LLMs.
|
|
For the languages included in our experiments (i.e., English, Spanish, French, and Dutch), the game logs are provided as `.jsonl` files, with one round per line.
|
|
The dataset was collected from the online platform Board Game Arena (https://en.boardgamearena.com/gamepanel?game=concept).
|
|
The dataset is **gated** to ensure responsible use and adherence to non-commercial terms. |
|
|
|
|
|
|
|
|
## Uses |
|
|
|
|
|
The use of this benchmark is **only permitted for research purposes**. |
|
|
Download the data per language as follows:
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
ds_en = load_dataset( |
|
|
"json", |
|
|
data_files="hf://datasets/IneG/concept/concept_english.jsonl", |
|
|
token=True, |
|
|
) |
|
|
``` |
|
|
|
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
Each line in the `.jsonl` file is a single **item object** with the following structure: |
|
|
|
|
|
- **`item_name`** *(string)* — Target concept. |
|
|
|
|
|
- **`metadata`** *(object)* |
|
|
- **`difficulty`** *(string)* — One of: |
|
|
`easy | hard | challenging` |
|
|
- **`card`** *(int)* — Source card or index. |
|
|
- **`found`** *(bool)* — Whether the item was correctly guessed. |
|
|
- **`game_ID`** *(string)* — Identifier of the source game. |
|
|
|
|
|
- **`steps`** *(object: `str → step`)* — Mapping from step indices (`"0"`, `"1"`, …) to detailed step data.
|
|
The keys preserve chronological order of the original game progression. |
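As an illustration, a single line could look like the following (fabricated values; the contents of each `steps` entry are described in the next section):

```json
{
  "item_name": "milk",
  "metadata": {"difficulty": "easy", "card": 7, "found": true, "game_ID": "000000"},
  "steps": {
    "0": {"clues": [], "moves": [], "deletes": [], "clears": [], "guesses": [], "feedbacks": []}
  }
}
```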
|
|
|
|
|
--- |
|
|
|
|
|
### 🔁 Step object |
|
|
|
|
|
Each step describes the state of play during one round, containing the following arrays: |
|
|
|
|
|
- **`clues`** *(array of clue)* — Hint markers with semantic categories. |
|
|
Fields: |
|
|
- `id` *(int)* — ID of the action in the game |
|
|
- `mId` *(int)* — ID of the action in the round |
|
|
- `mColor` *(string)* — The color of the hint |
|
|
- `mType` *(string)* — Hierarchy level of the hint (`0` = high-level, `1` = low-level)
|
|
- `x` *(string)* |
|
|
- `y` *(string)* |
|
|
- `sId` *(string)* |
|
|
- `sId_label` *(string)* — Textual representation of the clue |
|
|
|
|
|
- **`moves`** *(array)* — Board moves (e.g., color/category moves). |
|
|
Fields: |
|
|
- `mColor` |
|
|
- `sId` |
|
|
- `sId_label` |
|
|
|
|
|
- **`deletes`** *(array)* — Removal events; each entry records a single removed hint.
|
|
Fields: |
|
|
- `id` *(string)* |
|
|
- `removed_hint` *(clue)* |
|
|
|
|
|
- **`clears`** *(array)* — Batch removals by color. |
|
|
Fields: |
|
|
- `color` *(string)* |
|
|
- `removed_hints` *(array of clue)* |
|
|
|
|
|
- **`guesses`** *(array)* — All guesses made in the step. |
|
|
Fields: |
|
|
- `guess_id` *(int)* |
|
|
- `guess_raw` *(string, base64-encoded original input)* |
|
|
- `guess_text` *(string)* |
|
|
|
|
|
- **`feedbacks`** *(array)* — Feedback linked to guesses. |
|
|
Fields: |
|
|
- `guess_id` *(string or int)* |
|
|
- `feedback_code` *(int: 0 | 1 | 2)* — 0 = incorrect, 1 = almost correct, 2 = correct |
|
|
- `feedback_label` *(string)* |
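As a sketch of how these fields fit together, the snippet below parses one round, decodes the base64-encoded `guess_raw` field, and maps `feedback_code` to its label. The record is fabricated for illustration, not taken from the dataset.

```python
import base64
import json

# A fabricated round following the schema above (real rounds come from the .jsonl files).
line = json.dumps({
    "item_name": "rainbow",
    "metadata": {"difficulty": "easy", "card": 12, "found": True, "game_ID": "000000"},
    "steps": {
        "0": {
            "clues": [], "moves": [], "deletes": [], "clears": [],
            "guesses": [{
                "guess_id": 1,
                "guess_raw": base64.b64encode(b"rainbow").decode("ascii"),
                "guess_text": "rainbow",
            }],
            "feedbacks": [{"guess_id": 1, "feedback_code": 2, "feedback_label": "correct"}],
        }
    },
})

FEEDBACK = {0: "incorrect", 1: "almost correct", 2: "correct"}

round_ = json.loads(line)
# Step keys are stringified ints; sort numerically to preserve game order.
for idx in sorted(round_["steps"], key=int):
    step = round_["steps"][idx]
    for guess in step["guesses"]:
        decoded = base64.b64decode(guess["guess_raw"]).decode("utf-8")
        print(f"guess {guess['guess_id']}: {decoded}")
    for fb in step["feedbacks"]:
        print(f"feedback for guess {fb['guess_id']}: {FEEDBACK[fb['feedback_code']]}")
```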
|
|
|
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset, please cite the following paper: https://arxiv.org/abs/2510.13271
|
|
|
|
|
```bibtex |
|
|
@misc{gevers2025hintbenchmarkingllmsboard, |
|
|
title={Do You Get the Hint? Benchmarking LLMs on the Board Game Concept}, |
|
|
author={Ine Gevers and Walter Daelemans}, |
|
|
year={2025}, |
|
|
eprint={2510.13271}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CL}, |
|
|
url={https://arxiv.org/abs/2510.13271}, |
|
|
}
```