---
dataset_info:
  features:
    - name: image_bytes
      dtype: binary
    - name: action
      dtype: string
    - name: game
      dtype: string
    - name: trial_id
      dtype: int32
    - name: frame_idx
      dtype: int32
    - name: image_size
      dtype: int32
license: mit
task_categories:
  - robotics
  - reinforcement-learning
tags:
  - atari
  - vla
  - vision-language-action
  - imitation-learning
  - preprocessed
  - smolvlm
size_categories:
  - 1M<n<10M
---

# TESS-Atari Stage 1 - Preprocessed (15Hz, 384x384)

**Training-ready** version of the 15Hz dataset with images pre-resized to 384x384 (SmolVLM native resolution).

## Overview

| Metric | Value |
|--------|-------|
| Source | [TESS-Computer/atari-vla-stage1-15hz](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-15hz) |
| Samples | 1,340,293 |
| Image Size | 384x384 (pre-resized) |
| Action Rate | 15 Hz (3 actions per observation) |
| Format | Lumine-style action tokens |

## Why Preprocessed?

Training VLMs requires resizing images to the model's native resolution. Doing this on-the-fly creates a CPU bottleneck. This dataset has images **already resized**, giving ~10x faster training:

```
Raw dataset:     160x210 → resize during training → slow (CPU bound)
Preprocessed:    384x384 → ready to use → fast (GPU saturated)
```
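The preprocessing step amounts to a one-time resize and PNG re-encode per frame. A minimal sketch of that transform (using Pillow; the 160x210 frame here is a synthetic stand-in for a real Atari observation):

```python
from io import BytesIO

from PIL import Image

# Stand-in for a raw Atari frame (real frames are 160x210 RGB).
raw = Image.new("RGB", (160, 210), color=(0, 0, 0))

# Resize once, up front, to SmolVLM's native 384x384 resolution.
resized = raw.resize((384, 384), Image.LANCZOS)

# Re-encode as PNG bytes, matching the `image_bytes` field in the schema.
buf = BytesIO()
resized.save(buf, format="PNG")
image_bytes = buf.getvalue()
```

Paying this cost once at dataset-build time is what keeps the GPU saturated during training.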

## Action Format

```
<|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>
<|action_start|> LEFT ; LEFT ; LEFT <|action_end|>
<|action_start|> NOOP ; UP ; UPFIRE <|action_end|>
```
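Each token packs the 3 actions emitted between consecutive 15Hz observations, separated by `;`. A minimal parsing sketch (`parse_action_token` is a hypothetical helper for illustration, not part of the dataset or training code):

```python
def parse_action_token(token: str) -> list[str]:
    """Split a Lumine-style action token into its per-step actions."""
    inner = token.replace("<|action_start|>", "").replace("<|action_end|>", "")
    return [a.strip() for a in inner.split(";")]

actions = parse_action_token("<|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>")
print(actions)  # ['RIGHT', 'RIGHT', 'FIRE']
```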

## Schema

| Field | Type | Description |
|-------|------|-------------|
| `image_bytes` | bytes | PNG at 384x384 (pre-resized) |
| `action` | string | Lumine-style chunked action token |
| `game` | string | Game name |
| `trial_id` | int | Human player trial number |
| `frame_idx` | int | Frame index in trial |
| `image_size` | int | Always 384 |

## Usage

```python
from datasets import load_dataset
from PIL import Image
from io import BytesIO

# Load preprocessed dataset
ds = load_dataset("TESS-Computer/tess-atari-15hz-384", split="train")

# Images are already 384x384 - no resizing needed!
sample = ds[0]
img = Image.open(BytesIO(sample["image_bytes"]))
print(img.size)  # (384, 384)
print(sample["action"])  # <|action_start|> LEFT ; LEFT ; LEFT <|action_end|>
```

## Training

```bash
python scripts/train_v2.py \
    --preprocessed TESS-Computer/tess-atari-15hz-384 \
    --epochs 3 \
    --batch-size 4 \
    --grad-accum 32 \
    --wandb \
    --push-to-hub
```

## Related

- [Raw 15Hz dataset](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-15hz) - Original with 160x210 images
- [Raw 5Hz dataset](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-5hz) - Single action per observation
- [TESS-Atari repo](https://github.com/HusseinLezzaik/TESS-Atari) - Training code

## Citation

```bibtex
@misc{tessatari2025,
  title={TESS-Atari: Vision-Language-Action Models for Atari Games},
  author={Lezzaik, Hussein},
  year={2025},
  url={https://github.com/HusseinLezzaik/TESS-Atari}
}

@misc{atarihead2019,
  title={Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset},
  author={Zhang, Ruohan and others},
  year={2019},
  url={https://zenodo.org/records/3451402}
}
```