---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: game
    dtype: string
  - name: trial_id
    dtype: int32
  - name: episode_id
    dtype: int32
  - name: chunk_idx
    dtype: int32
  - name: frame_start
    dtype: int32
  - name: action
    dtype: string
  - name: action_ints
    dtype: string
  - name: score
    dtype: int32
  - name: reward_sum
    dtype: int32
  - name: gaze_positions
    dtype: string
  - name: image_bytes
    dtype: binary
license: mit
task_categories:
- robotics
- reinforcement-learning
tags:
- atari
- vla
- vision-language-action
- imitation-learning
- human-demonstrations
- action-chunking
size_categories:
- 1M<n<10M
---
# TESS-Atari Stage 1 (15Hz)
Human gameplay demonstrations from Atari games with **action chunking**, formatted for Vision-Language-Action (VLA) model training.
## Overview
| Metric | Value |
|--------|-------|
| Source | [Atari-HEAD](https://zenodo.org/records/3451402) |
| Games | 11 (overlapping with DIAMOND benchmark) |
| Samples | ~1.3M |
| Observation Rate | 5 Hz |
| Action Rate | 15 Hz (3 actions per observation) |
| Format | Lumine-style action tokens |
## Why Action Chunking?
VLA models typically infer at ~5 Hz, while Atari with frame_skip=4 runs at an effective 15 Hz. Action chunking predicts 3 actions per observation, matching the game's effective action rate while accommodating the slower model inference.
```
Observation (5 Hz) → VLA → 3 Actions (executed at 15 Hz)
```
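A minimal sketch of the grouping, assuming a hypothetical list of per-frame action names (`raw_actions` is illustrative, not a dataset field):
```python
# Group a 15 Hz per-frame action stream into 3-action chunks, one per 5 Hz observation.
raw_actions = ["RIGHT", "RIGHT", "FIRE", "LEFT", "LEFT", "LEFT"]  # hypothetical stream
CHUNK_SIZE = 3  # 3 actions at 15 Hz per observation at 5 Hz

chunks = [raw_actions[i:i + CHUNK_SIZE] for i in range(0, len(raw_actions), CHUNK_SIZE)]
for chunk in chunks:
    print("<|action_start|> " + " ; ".join(chunk) + " <|action_end|>")
# <|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>
# <|action_start|> LEFT ; LEFT ; LEFT <|action_end|>
```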
## Games Included
Alien, Asterix, BankHeist, Breakout, DemonAttack, Freeway, Frostbite, Hero, MsPacman, RoadRunner, Seaquest
## Action Format
```
<|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>
<|action_start|> LEFT ; LEFT ; LEFT <|action_end|>
<|action_start|> NOOP ; UP ; UPFIRE <|action_end|>
```
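Each token wraps 3 action names, separated by `;`, between `<|action_start|>` and `<|action_end|>`. A minimal parser for this format could look like the following (`parse_action_token` is an illustrative helper, not part of the dataset):
```python
import re

def parse_action_token(token: str) -> list[str]:
    """Extract the individual action names from a Lumine-style chunked token."""
    match = re.search(r"<\|action_start\|>(.*?)<\|action_end\|>", token)
    if match is None:
        raise ValueError(f"malformed action token: {token!r}")
    return [name.strip() for name in match.group(1).split(";")]

print(parse_action_token("<|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>"))
# ['RIGHT', 'RIGHT', 'FIRE']
```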
## Schema
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique sample ID: `{game}_{trial}_{chunk}` |
| `game` | string | Game name (lowercase) |
| `trial_id` | int | Human player trial number |
| `episode_id` | int | Episode within trial (-1 if unknown) |
| `chunk_idx` | int | Chunk sequence number |
| `frame_start` | int | First frame index of this chunk |
| `action` | string | Lumine-style chunked action token |
| `action_ints` | string | Raw ALE action codes, comma-separated, e.g. `"4,4,1"` |
| `score` | int | Game score at chunk start |
| `reward_sum` | int | Total reward over the chunk's 3 frames |
| `gaze_positions` | string | Eye-tracking gaze positions from the chunk's first frame |
| `image_bytes` | bytes | PNG-encoded first frame of the chunk |
## Usage
```python
from io import BytesIO

from datasets import load_dataset
from PIL import Image

ds = load_dataset("TESS-Computer/atari-vla-stage1-15hz")

# Get a sample
sample = ds["train"][0]
print(sample["action"])  # <|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>

# Parse individual actions
actions = sample["action_ints"].split(",")  # ["4", "4", "1"]

# Decode the PNG of the chunk's first frame
img = Image.open(BytesIO(sample["image_bytes"]))
```
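To work with a single game without materializing all ~1.3M samples, streaming mode with a filter is one option (game names are lowercase, per the schema):
```python
from datasets import load_dataset

# Stream the dataset and keep only one game's samples.
stream = load_dataset("TESS-Computer/atari-vla-stage1-15hz", split="train", streaming=True)
breakout = stream.filter(lambda s: s["game"] == "breakout")
for sample in breakout.take(3):
    print(sample["id"], sample["action"])
```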
## Evaluation
Evaluate with [DIAMOND](https://diamond-wm.github.io/) world models (frame_skip=4): at each observation step, execute the 3 predicted actions sequentially before reading the next observation.
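The sketch below shows the loop shape with a Gymnasium Atari environment standing in for a DIAMOND world model; `predict_chunk` is a hypothetical placeholder for VLA inference:
```python
import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # requires gymnasium and ale-py

def predict_chunk(obs):
    """Hypothetical stand-in for VLA inference; returns one 3-action chunk."""
    return [0, 0, 0]  # NOOP placeholder; replace with real model output

env = gym.make("ALE/Breakout-v5", frameskip=4)  # 60 Hz / 4 = 15 Hz effective
obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    chunk = predict_chunk(obs)
    for code in chunk:  # execute the whole chunk before the next observation
        obs, reward, terminated, truncated, info = env.step(code)
        if terminated or truncated:
            break
```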
## Related
- [5Hz variant](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-5hz) - Single action per observation (simpler, but a lower effective action rate)
- [Lumine AI](https://www.lumine-ai.org/) - Inspiration for VLA architecture and action chunking
- [DIAMOND](https://diamond-wm.github.io/) - World model for evaluation
## Citation
```bibtex
@misc{atarihead2019,
  title={Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset},
  author={Zhang, Ruohan and others},
  year={2019},
  url={https://zenodo.org/records/3451402}
}
```