---
dataset_info:
  features:
  - name: image_bytes
    dtype: binary
  - name: action
    dtype: string
  - name: game
    dtype: string
  - name: trial_id
    dtype: int32
  - name: frame_idx
    dtype: int32
  - name: image_size
    dtype: int32
license: mit
task_categories:
- robotics
- reinforcement-learning
tags:
- atari
- vla
- vision-language-action
- imitation-learning
- preprocessed
- smolvlm
size_categories:
- 1M<n<10M
---

# TESS-Atari Stage 1 - Preprocessed (15Hz, 384x384)

**Training-ready** version of the 15Hz dataset, with images pre-resized to 384x384 (SmolVLM's native resolution).

## Overview

| Metric | Value |
|--------|-------|
| Source | [TESS-Computer/atari-vla-stage1-15hz](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-15hz) |
| Samples | 1,340,293 |
| Image Size | 384x384 (pre-resized) |
| Action Rate | 15 Hz (3 actions per observation) |
| Format | Lumine-style action tokens |

## Why Preprocessed?

Training VLMs requires resizing images to the model's native resolution. Doing this on the fly creates a CPU bottleneck. In this dataset the images are **already resized**, giving roughly 10x faster training:

```
Raw dataset:  160x210 → resize during training → slow (CPU bound)
Preprocessed: 384x384 → ready to use           → fast (GPU saturated)
```
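The offline resize step amounts to decoding each raw frame, resizing, and re-encoding as PNG. A minimal sketch (the exact preprocessing script is not published here; the `resize_frame` helper and the bilinear filter are illustrative assumptions):

```python
from io import BytesIO

from PIL import Image


def resize_frame(png_bytes: bytes, size: int = 384) -> bytes:
    """Resize a raw Atari frame (e.g. 160x210 PNG) to size x size and re-encode as PNG."""
    img = Image.open(BytesIO(png_bytes)).convert("RGB")
    img = img.resize((size, size), Image.BILINEAR)
    out = BytesIO()
    img.save(out, format="PNG")
    return out.getvalue()
```

Paying this cost once offline is what moves the bottleneck off the training loop's CPU workers.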

## Action Format

```
<|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>
<|action_start|> LEFT ; LEFT ; LEFT <|action_end|>
<|action_start|> NOOP ; UP ; UPFIRE <|action_end|>
```
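A token can be split back into its three per-frame actions with plain string handling. A sketch (`parse_action_token` is an illustrative helper, not part of the dataset tooling):

```python
ACTION_START = "<|action_start|>"
ACTION_END = "<|action_end|>"


def parse_action_token(text: str) -> list[str]:
    """Split a Lumine-style chunked action token into individual action names."""
    inner = text.split(ACTION_START, 1)[1].split(ACTION_END, 1)[0]
    return [a.strip() for a in inner.split(";")]
```

For example, `parse_action_token("<|action_start|> RIGHT ; RIGHT ; FIRE <|action_end|>")` yields `["RIGHT", "RIGHT", "FIRE"]`.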

## Schema

| Field | Type | Description |
|-------|------|-------------|
| `image_bytes` | binary | PNG at 384x384 (pre-resized) |
| `action` | string | Lumine-style chunked action token |
| `game` | string | Game name |
| `trial_id` | int32 | Human player trial number |
| `frame_idx` | int32 | Frame index within the trial |
| `image_size` | int32 | Always 384 |

## Usage

```python
from io import BytesIO

from datasets import load_dataset
from PIL import Image

# Load the preprocessed dataset
ds = load_dataset("TESS-Computer/tess-atari-15hz-384", split="train")

# Images are already 384x384 - no resizing needed
sample = ds[0]
img = Image.open(BytesIO(sample["image_bytes"]))
print(img.size)          # (384, 384)
print(sample["action"])  # <|action_start|> LEFT ; LEFT ; LEFT <|action_end|>
```
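Because the frames are stored as PNG bytes, a DataLoader-style collate step only needs to decode and pair them with their action tokens. A minimal sketch (the `collate` helper is hypothetical; feed the decoded images into your model's processor as appropriate):

```python
from io import BytesIO

from PIL import Image


def collate(batch: list[dict]) -> tuple[list, list]:
    """Decode each sample's PNG bytes and keep the action strings alongside."""
    images = [Image.open(BytesIO(s["image_bytes"])) for s in batch]
    actions = [s["action"] for s in batch]
    return images, actions
```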

## Training

```bash
python scripts/train_v2.py \
  --preprocessed TESS-Computer/tess-atari-15hz-384 \
  --epochs 3 \
  --batch-size 4 \
  --grad-accum 32 \
  --wandb \
  --push-to-hub
```

## Related

- [Raw 15Hz dataset](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-15hz) - Original with 160x210 images
- [Raw 5Hz dataset](https://huggingface.co/datasets/TESS-Computer/atari-vla-stage1-5hz) - Single action per observation
- [TESS-Atari repo](https://github.com/HusseinLezzaik/TESS-Atari) - Training code

## Citation

```bibtex
@misc{atarihead2019,
  title={Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset},
  author={Zhang, Ruohan and others},
  year={2019},
  url={https://zenodo.org/records/3451402}
}
```