---
license: cc-by-4.0
task_categories:
  - video-classification
  - other
language:
  - en
tags:
  - counter-strike
  - cs2
  - esports
  - hltv
  - video
  - audio
  - parquet
  - reinforcement-learning
pretty_name: "HLTV CS2 POV Rendered Dataset"
configs:
  - config_name: previews
    data_files:
      - split: train
        path: data/**/chunks-preview-*.parquet
    default: true
  - config_name: matches
    data_files:
      - split: train
        path: index/manifest-*.parquet
  - config_name: rounds
    data_files:
      - split: train
        path: index/rounds-*.parquet
  - config_name: chunks
    data_files:
      - split: train
        path: data/**/chunks-full-*.parquet
---

# HLTV CS2 POV Rendered Dataset

Rendered Counter-Strike 2 POV training clips derived from `blanchon/cs2_dataset_demo`.
Each row is a single player POV chunk of at most one minute. The default config is
`previews`, a small path-only overlay+audio video view for browsing. The heavy `chunks` config
contains full-resolution video/audio as loose files referenced by path, plus
embedded inputs and world-state streams.

The raw HLTV `.dem` files stay in the source dataset. This repo stores only the
rendered training dataset.

## Configs

- `previews` (default): one low-resolution `preview_video` row per chunk with
  the input overlay baked in. Preview MP4s are stored as loose files and the
  Parquet stores only their relative paths, so filtering stays cheap. Each
  preview directory also carries lightweight `inputs.preview.json` and
  `world.preview.jsonl` sidecars sampled at 1 Hz for debugging/browsing without
  loading the heavy `chunks` config. The
  `preview_path` column is relative to the preview Parquet directory
  (`previews/chunk_000123/preview.mp4`), while `preview_video.path` is an
  `hf://datasets/<repo>@main/...` URI so Hugging Face's dataset viewer can
  resolve it without inlining bytes. A sketch that resolves `preview_path` to a
  local file follows this list.
- `matches`: one row per rendered `(match_id, map_name)` with team/event/date
  metadata.
- `rounds`: one row per rendered `(match_id, map_name, round)` with tick
  boundaries.
- `chunks`: full training rows with path-only `video`/`audio`, embedded
  `inputs`/`worlds`, and typed metadata.
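
As a sketch of the path resolution described in the `previews` bullet: a single
preview clip can be fetched with `huggingface_hub.hf_hub_download` by joining
the Hive-partitioned directory with the relative `preview_path`. The match/map/
player/chunk values below are illustrative; take real ones from the Parquet
columns.

```python
from pathlib import PurePosixPath

from huggingface_hub import hf_hub_download

# Illustrative coordinates; read real values from the preview Parquet rows.
partition = PurePosixPath("data/match_id=2393343/map_name=de_ancient/player=0")
preview_path = "previews/chunk_000123/preview.mp4"  # relative, as stored in Parquet

# Joining the partition directory with the relative path yields the repo path.
local_mp4 = hf_hub_download(
    repo_id="blanchon/cs2_dataset_render",
    repo_type="dataset",
    filename=str(partition / preview_path),
)
print(local_mp4)
```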

## Filesystem Layout

```
data/
  match_id=<match_id>/map_name=<map_name>/player=<player>/
    chunks-preview-<machine>-<uuid>.parquet
    chunks-full-<machine>-<uuid>.parquet
    chunks/
      chunk_<ordinal>/
        video.mp4
        audio.wav
    previews/
      chunk_<ordinal>/
        preview.mp4
        inputs.preview.json
        world.preview.jsonl
index/
  manifest-<machine>-<uuid>.parquet
  rounds-<machine>-<uuid>.parquet
state/
  processed/<input_metadata_stem>/<match_id>.json
  failed/...
  skipped/...
```

Every machine writes a unique `<machine>-<uuid>` shard, so parallel uploads do
not touch the same files. `state/processed` is written only after the shard
upload succeeds. Data files use Hive-style `key=value` directories for pruning.
The same keys are also stored inside the Parquets so the Hugging Face viewer and
`datasets.load_dataset("parquet", ...)` expose them even without applying Hive
partitioning. Parquets use best-effort bloom filters on hot filter columns.
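
A minimal sketch of the `<machine>-<uuid>` shard-naming scheme described above
(the helper name is ours, not part of the pipeline):

```python
import socket
import uuid


def shard_filename(prefix: str) -> str:
    """Build a <prefix>-<machine>-<uuid>.parquet name so concurrent writers
    on different machines can never collide on the same file."""
    machine = socket.gethostname()
    return f"{prefix}-{machine}-{uuid.uuid4().hex}.parquet"


print(shard_filename("chunks-preview"))  # e.g. chunks-preview-myhost-3f2a...parquet
```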

## Stream With `datasets`

```python
from datasets import load_dataset

repo = "blanchon/cs2_dataset_render"

# Cheap default browsing view.
previews = load_dataset(repo, split="train", streaming=True)
for row in previews.take(3):
    print(row["match_id"], row["round"], row["player"], row["preview_video"])

# Full training rows. Use columns/filters to avoid pulling unused bytes.
chunks = load_dataset(
    repo,
    "chunks",
    split="train",
    streaming=True,
    columns=["video", "audio", "inputs", "worlds", "match_id", "round", "player"],
    filters=[("player", "==", 0)],
)
```

## Query With DuckDB

```sql
-- DuckDB resolves hf:// paths via the httpfs extension
-- (autoloaded in recent releases; otherwise INSTALL httpfs; LOAD httpfs;).

-- Match-level index only; no media bytes.
SELECT match_id, map_name, team1, team2, event, match_date
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/manifest-*.parquet'
LIMIT 20;

-- Round timing index only.
SELECT match_id, map_name, round, round_duration_ticks
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/rounds-*.parquet'
WHERE round_duration_ticks > 3000;

-- Preview rows for fast visual review.
SELECT match_id, map_name, round, player, chunk_index, primary_weapon
FROM 'hf://datasets/blanchon/cs2_dataset_render/data/**/chunks-preview-*.parquet'
WHERE player = 0
LIMIT 20;
```

## Partial Download

```bash
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "index/*.parquet"

hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/**/chunks-preview-*.parquet"

hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/map_name=de_ancient/player=0/chunks-full-*.parquet"
```
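
Once the index shards are local they can be inspected without `datasets` at
all. A sketch with `pandas` and `pyarrow`, assuming the first `hf download`
call above was run with `--local-dir .` so the shards land under `./index/`:

```python
import glob

import pandas as pd

# Read only the manifest shards; rounds-*.parquet has a different schema.
manifest = pd.concat(
    (pd.read_parquet(path) for path in glob.glob("index/manifest-*.parquet")),
    ignore_index=True,
)
print(manifest[["match_id", "map_name", "event", "match_date"]].head())
```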

## Row Semantics

- `player` is the stable canonical 0-9 player index for a match.
- `spec_slot` is the transient CS2 spectator slot used only to record the POV.
- Recording starts at the playable round start (`freeze_end_tick`) and stops at
  the player's death tick, or at round end for survivors.
- Before upload, the production worker validates exactly 10 canonical players
  per round, unique spec-slot resolution, no recording past death, and valid
  video/audio/inputs/world sidecars.