# 2048 Expert Gameplay Dataset

State-action pairs from an expert N-Tuple Network agent playing 2048, suitable for imitation learning / supervised training of 2048 agents.
## Stats
- Source games: 10,000
- Games after filtering: 9,000
- Total moves: 54,010,983
- Average score: 143,847
- Win rate (>= 2048): 100% (losing games removed)
- Score floor: 62,152 (bottom 10% removed)
## Files
- `train.jsonl` - 8,100 games (48,592,790 moves) for training
- `val.jsonl` - 900 games (5,418,193 moves) for validation
- `games.jsonl` - original unfiltered dataset (10,000 games)
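A minimal loader sketch for the files above (the helper names are illustrative, not part of the dataset; flattening games into (board, action) pairs is one common way to prepare imitation-learning data):

```python
import json

def load_games(path):
    """Yield one game dict per line of a JSONL file (e.g. train.jsonl)."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def to_pairs(game):
    """Flatten a game into (board, action) state-action pairs."""
    return [(m["board"], m["action"]) for m in game["moves"]]
```

Each yielded game matches the schema shown in the Format section below.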
## Filtering
The raw dataset was filtered to improve supervised learning quality:
- Removed games that did not reach the 2048 tile (losing games with desperate end-moves)
- Removed bottom 10% by score (bad-luck games with messy board patterns)
- Split 90/10 into train/validation (every 10th kept game goes to validation)
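The filtering steps above can be sketched as follows (helper name and the re-sort by `game_id` after the score cut are assumptions; the thresholds are as stated in the list):

```python
def filter_games(games, win_tile=2048, drop_frac=0.10):
    """Keep games that reached win_tile, drop the lowest-scoring
    fraction, then route every 10th kept game to validation."""
    winners = [g for g in games if g["max_tile"] >= win_tile]
    winners.sort(key=lambda g: g["score"])
    cutoff = int(len(winners) * drop_frac)
    kept = winners[cutoff:]
    kept.sort(key=lambda g: g["game_id"])  # restore original order (assumed)
    train = [g for i, g in enumerate(kept) if i % 10 != 0]
    val = [g for i, g in enumerate(kept) if i % 10 == 0]
    return train, val
```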
## Format
JSONL format, one game per line. Each game object:
```json
{
  "game_id": 0,
  "score": 142056,
  "max_tile": 8192,
  "num_moves": 2341,
  "moves": [
    {
      "board": [0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0],
      "action": 0,
      "action_name": "up",
      "reward": 0,
      "move_number": 0
    }
  ]
}
```
- `board`: 16 tile values in row-major order (0 = empty, then 2, 4, 8, ...)
- `action`: 0 = up, 1 = right, 2 = down, 3 = left
- `action_name`: human-readable name of `action`
- `reward`: points gained from merges on this move
- `move_number`: 0-indexed position of the move within the game
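Since `board` is row-major, index `r * 4 + c` holds row `r`, column `c`. A small decoding sketch (the log2 exponent encoding is a common preprocessing choice for 2048 networks, not part of the dataset; helper names are illustrative):

```python
import math

ACTION_NAMES = ["up", "right", "down", "left"]  # index = action id

def to_grid(board):
    """Reshape the flat 16-value row-major board into 4 rows of 4."""
    return [board[r * 4:(r + 1) * 4] for r in range(4)]

def encode(board):
    """Map tile values to exponents: 0 -> 0, 2 -> 1, 4 -> 2, ...
    A compact input encoding often used for 2048 agents (assumption)."""
    return [0 if v == 0 else int(math.log2(v)) for v in board]
```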