---
license: mit
task_categories:
- robotics
- image-to-text
tags:
- VLA
- gaming
- counter-strike
- behavioral-cloning
- imitation-learning
- action-chunking
size_categories:
- 1M<n<10M
---

# csgo-vla-stage1-5hz

CS:GO deathmatch gameplay packaged for vision-language-action (VLA) training at 5Hz. Each sample pairs a JPEG screenshot with a text-formatted chunk of 3 consecutive per-frame actions.

## Action Format

**Format:**

```
<|action_start|> m1_x m1_y [keys1] ; m2_x m2_y [keys2] ; m3_x m3_y [keys3] <|action_end|>
```

**Examples:**

```
<|action_start|> 0 0 ; 0 0 ; 0 0 <|action_end|>                  # idle
<|action_start|> 5 0 W ; 3 0 W ; 4 0 W <|action_end|>            # walking
<|action_start|> -200 50 W L ; -50 10 L ; 10 0 W <|action_end|>  # flick shot
```

Each chunk contains the exact mouse deltas and keys for each of its 3 frames - no aggregation. A decoding sketch appears at the end of this card.

## Schema

| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique sample ID |
| `episode_id` | string | Source HDF5 file |
| `chunk_idx` | int32 | Chunk number within episode |
| `frame_idx` | int32 | Starting frame number |
| `action` | string | Text-formatted 3-action chunk |
| `kill_flag` | int32 | 1 if any kill in chunk |
| `death_flag` | int32 | 1 if any death in chunk |
| `split` | string | "train" or "test" |
| `image_bytes` | bytes | JPEG screenshot (first frame) |

## Usage

```python
from datasets import load_dataset

# Load full dataset
ds = load_dataset("TESS-Computer/csgo-vla-stage1-5hz")

# Filter by split
train_ds = ds.filter(lambda x: x['split'] == 'train')
test_ds = ds.filter(lambda x: x['split'] == 'test')
```

## Why 5Hz with Chunking?

1. **VLA inference speed:** 62ms per action (16Hz) is too fast for current VLMs. 200ms (5Hz) is achievable.
2. **No information loss:** Each chunk predicts exactly what the human did for 3 consecutive frames.
3. **World model sync:** Diamond executes `step(a1), step(a2), step(a3)`, then returns the resulting frame to the VLA (sketched at the end of this card).

## Related

- [16Hz variant](https://huggingface.co/datasets/TESS-Computer/csgo-vla-stage1-16hz) - 1 action per frame
- [Diamond World Model](https://github.com/eloialonso/diamond) - For evaluation
- [Original Dataset](https://huggingface.co/datasets/TeaPearce/CounterStrike_Deathmatch)
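
## Decoding a Sample (sketch)

A minimal sketch for turning one sample back into an image and structured actions, following the action format and schema above. `parse_action` is an illustrative helper, not an API shipped with the dataset, and the `split="train"` argument assumes the default Hub split layout.

```python
import io
import re

from datasets import load_dataset
from PIL import Image

ACTION_RE = re.compile(r"<\|action_start\|>(.*?)<\|action_end\|>", re.DOTALL)

def parse_action(text):
    """Parse a 3-action chunk into [{'dx': int, 'dy': int, 'keys': [...]}, ...]."""
    match = ACTION_RE.search(text)
    if match is None:
        raise ValueError(f"no action chunk in: {text!r}")
    actions = []
    for part in match.group(1).split(";"):
        tokens = part.split()        # e.g. ["-200", "50", "W", "L"]
        actions.append({
            "dx": int(tokens[0]),    # mouse delta x for this frame
            "dy": int(tokens[1]),    # mouse delta y for this frame
            "keys": tokens[2:],      # zero or more held keys
        })
    return actions

ds = load_dataset("TESS-Computer/csgo-vla-stage1-5hz", split="train")
sample = ds[0]

image = Image.open(io.BytesIO(sample["image_bytes"]))  # first-frame JPEG
actions = parse_action(sample["action"])               # 3 per-frame actions
print(image.size, actions)
```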
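
## Chunked Control Loop (sketch)

To make the world-model sync concrete: one 5Hz cycle feeds a frame to the VLA, parses the returned 3-action chunk, and advances the world model one frame per action. `vla.predict` and `world_model.step` are hypothetical interfaces used only for illustration; Diamond's actual API differs.

```python
def rollout_chunk(vla, world_model, frame):
    """One 5Hz control cycle: predict a 3-action chunk, execute it frame by frame."""
    chunk_text = vla.predict(frame)         # hypothetical: frame -> action string
    a1, a2, a3 = parse_action(chunk_text)   # parse_action from the sketch above
    for action in (a1, a2, a3):
        frame = world_model.step(action)    # hypothetical: one 16Hz frame per action
    return frame                            # next observation shown to the VLA
```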