---
license: other
license_name: mit-attribution
task_categories:
- reinforcement-learning
tags:
- "2048"
- game-ai
- supervised-learning
- imitation-learning
size_categories:
- 10K<n<100K
---
# 2048 Expert Gameplay Dataset
State-action pairs from an expert N-Tuple Network agent playing 2048.
Can be used for imitation learning / supervised training of 2048 agents.
## Stats
- Source games: 10,000
- Games after filtering: 9,000
- Total moves: 54,010,983
- Average score: 143,847
- Win rate (>= 2048): 100% (losing games removed)
- Score floor: 62,152 (bottom 10% removed)
## Files
- `train.jsonl` - 8,100 games (48,592,790 moves) for training
- `val.jsonl` - 900 games (5,418,193 moves) for validation
- `games.jsonl` - original unfiltered dataset (10,000 games)
## Filtering
The raw dataset was filtered to improve supervised learning quality:
1. Removed games that did not reach the 2048 tile (losing games with desperate end-moves)
2. Removed bottom 10% by score (bad-luck games with messy board patterns)
3. Split 90/10 into train/validation (every 10th kept game assigned to validation)
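The three filtering steps above can be sketched as follows. This is a minimal reconstruction, not the exact script used to build the dataset; it assumes each game object carries the `max_tile` and `score` fields documented in the Format section below.

```python
import json

def filter_and_split(games):
    """Sketch of the filtering pipeline: keep winning games,
    drop the bottom 10% by score, then split 90/10."""
    # 1. Remove games that did not reach the 2048 tile.
    games = [g for g in games if g["max_tile"] >= 2048]
    # 2. Remove the bottom 10% by score.
    games = sorted(games, key=lambda g: g["score"])[len(games) // 10:]
    # 3. Every 10th kept game goes to validation.
    train = [g for i, g in enumerate(games, 1) if i % 10 != 0]
    val = [g for i, g in enumerate(games, 1) if i % 10 == 0]
    return train, val

def write_jsonl(path, games):
    with open(path, "w") as f:
        for g in games:
            f.write(json.dumps(g) + "\n")
```

With the published counts, this yields the 8,100/900 train/validation split from the 9,000 games that survive filtering.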
## Format
JSONL format, one game per line. Each game object:
```json
{
  "game_id": 0,
  "score": 142056,
  "max_tile": 8192,
  "num_moves": 2341,
  "moves": [
    {
      "board": [0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0],
      "action": 0,
      "action_name": "up",
      "reward": 0,
      "move_number": 0
    }
  ]
}
```
- `board`: 16 tile values in row-major order (0=empty, 2/4/8/...)
- `action`: 0=up, 1=right, 2=down, 3=left
- `reward`: points from merges on this move
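Given this format, the files can be streamed without loading everything into memory. A minimal sketch (the `train.jsonl` path and field names come from this card; the flattening into pairs is one possible way to prepare imitation-learning data):

```python
import json

def iter_state_action_pairs(path="train.jsonl"):
    """Yield (board, action) pairs from every move of every game,
    one JSON-encoded game per line."""
    with open(path) as f:
        for line in f:
            game = json.loads(line)
            for move in game["moves"]:
                yield move["board"], move["action"]
```

Each yielded `board` is the 16-element row-major tile list and `action` is the integer 0-3 described above, ready to feed a supervised policy model.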