---
license: cc-by-4.0
task_categories:
- video-classification
- other
language:
- en
tags:
- counter-strike
- cs2
- esports
- hltv
- video
- audio
- parquet
- reinforcement-learning
pretty_name: "HLTV CS2 POV Rendered Dataset"
configs:
- config_name: previews
  data_files:
  - split: train
    path: data/**/chunks-preview-*.parquet
  default: true
- config_name: matches
  data_files:
  - split: train
    path: index/manifest-*.parquet
- config_name: rounds
  data_files:
  - split: train
    path: index/rounds-*.parquet
- config_name: chunks
  data_files:
  - split: train
    path: data/**/chunks-full-*.parquet
---

# HLTV CS2 POV Rendered Dataset


Rendered Counter-Strike 2 POV training clips derived from `blanchon/cs2_dataset_demo`.
Each row is a single player-POV chunk of at most one minute. The default config is
`previews`, a lightweight path-only view of overlay+audio preview clips for browsing.
The heavy `chunks` config contains full-resolution video/audio as loose files
referenced by path, plus embedded inputs and world-state streams.


The raw HLTV `.dem` files stay in the source dataset. This repo stores only the
rendered training data.


## Configs


- `previews` (default): one low-resolution `preview_video` row per chunk with
  the input overlay baked in. Preview MP4s are stored as loose files and the
  Parquet stores only their relative paths, so filtering stays cheap. Each
  preview directory also carries lightweight `inputs.preview.json` and
  `world.preview.jsonl` sidecars sampled at 1 Hz for debugging/browsing without
  loading the heavy `chunks` config (see the sketch after this list). The
  `preview_path` column is relative to the preview Parquet directory
  (`previews/chunk_000123/preview.mp4`), while `preview_video.path` is an
  `hf://datasets/<repo>@main/...` URI so Hugging Face's dataset viewer can
  resolve it without inlining bytes.
- `matches`: one row per rendered `(match_id, map_name)` with team/event/date
  metadata.
- `rounds`: one row per rendered `(match_id, map_name, round)` with tick
  boundaries.
- `chunks`: full training rows with path-only `video`/`audio`, embedded
  `inputs`/`worlds`, and typed metadata.
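
The sidecars live next to each preview clip, so once a shard folder is on disk they can be read straight off the path in `preview_path`. A minimal sketch, assuming one shard directory has already been downloaded locally (the match/map/player values below are placeholders) and reusing the example `preview_path` from above:

```python
import json
from pathlib import Path

# Hypothetical local shard directory, e.g. fetched with the partial-download
# commands shown later in this card.
shard_dir = Path("data/match_id=2393343/map_name=de_ancient/player=0")
preview_path = "previews/chunk_000123/preview.mp4"  # value of the `preview_path` column

clip_dir = (shard_dir / preview_path).parent
inputs = json.loads((clip_dir / "inputs.preview.json").read_text())
world = [
    json.loads(line)
    for line in (clip_dir / "world.preview.jsonl").read_text().splitlines()
    if line.strip()
]
print(len(world), "world samples at ~1 Hz")
```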


## Filesystem Layout


```
data/
  match_id=<match_id>/map_name=<map_name>/player=<player>/
    chunks-preview-<machine>-<uuid>.parquet
    chunks-full-<machine>-<uuid>.parquet
    chunks/
      chunk_<ordinal>/
        video.mp4
        audio.wav
    previews/
      chunk_<ordinal>/
        preview.mp4
        inputs.preview.json
        world.preview.jsonl
index/
  manifest-<machine>-<uuid>.parquet
  rounds-<machine>-<uuid>.parquet
state/
  processed/<input_metadata_stem>/<match_id>.json
  failed/...
  skipped/...
```


Every machine writes a unique `<machine>-<uuid>` shard, so parallel uploads do
not touch the same files. `state/processed` is written only after the shard
upload succeeds. Data files use Hive-style `key=value` directories for pruning.
The same keys are also stored inside the Parquets so the Hugging Face viewer and
`datasets.load_dataset("parquet", ...)` expose them even without applying Hive
partitioning. Parquets use best-effort bloom filters on hot filter columns.
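
Once part of the `data/` tree is local, the `key=value` folder names let a scan skip whole match/map/player directories before any Parquet file is opened, while the same keys remain available as regular columns inside each shard. A minimal pyarrow sketch, assuming a local copy of the tree; the `de_ancient` / `player=0` values are only example filters:

```python
from pathlib import Path

import pyarrow.dataset as ds

# Prune at the directory level using the Hive-style keys in the paths,
# then read only the preview shards from the surviving folders.
shards = [
    str(p)
    for p in Path("data").glob("match_id=*/map_name=de_ancient/player=0/chunks-preview-*.parquet")
]
previews = ds.dataset(shards, format="parquet")
table = previews.to_table(columns=["match_id", "map_name", "round", "player", "chunk_index"])
print(table.num_rows, "preview rows after pruning")
```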


## Stream With `datasets`


```python
from datasets import load_dataset

repo = "blanchon/cs2_dataset_render"

# Cheap default browsing view.
previews = load_dataset(repo, split="train", streaming=True)
for row in previews.take(3):
    print(row["match_id"], row["round"], row["player"], row["preview_video"])

# Full training rows. Use columns/filters to avoid pulling unused bytes.
chunks = load_dataset(
    repo,
    "chunks",
    split="train",
    streaming=True,
    columns=["video", "audio", "inputs", "worlds", "match_id", "round", "player"],
    filters=[("player", "==", 0)],
)
```
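
Because media is referenced by path rather than embedded, a single clip can be fetched on demand once its repo-relative location is known. A minimal sketch, assuming `row` is one of the streamed `previews` rows above and that `preview_path` sits under the partition directories described in the Filesystem Layout section:

```python
from huggingface_hub import hf_hub_download

# Rebuild the repo-relative path of one preview clip from the row's partition
# keys plus its `preview_path`, then download just that MP4.
repo_path = (
    f"data/match_id={row['match_id']}/map_name={row['map_name']}"
    f"/player={row['player']}/{row['preview_path']}"
)
local_mp4 = hf_hub_download(repo, filename=repo_path, repo_type="dataset")
print(local_mp4)
```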


## Query With DuckDB


```sql
-- Match-level index only; no media bytes.
SELECT match_id, map_name, team1, team2, event, match_date
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/manifest-*.parquet'
LIMIT 20;

-- Round timing index only.
SELECT match_id, map_name, round, round_duration_ticks
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/rounds-*.parquet'
WHERE round_duration_ticks > 3000;

-- Preview rows for fast visual review.
SELECT match_id, map_name, round, player, chunk_index, primary_weapon
FROM 'hf://datasets/blanchon/cs2_dataset_render/data/**/chunks-preview-*.parquet'
WHERE player = 0
LIMIT 20;
```
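
The same queries can be driven from Python with the `duckdb` package. A minimal sketch, assuming a DuckDB build whose `httpfs` extension can resolve `hf://` paths:

```python
import duckdb

# Round-timing query from above, returned as a pandas DataFrame.
con = duckdb.connect()
rounds = con.sql("""
    SELECT match_id, map_name, round, round_duration_ticks
    FROM 'hf://datasets/blanchon/cs2_dataset_render/index/rounds-*.parquet'
    WHERE round_duration_ticks > 3000
""").df()
print(rounds.head())
```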


## Partial Download


```bash
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "index/*.parquet"

hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/**/chunks-preview-*.parquet"

hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/map_name=de_ancient/player=0/chunks-full-*.parquet"
```
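
The same include patterns work from Python via `huggingface_hub.snapshot_download`, which is handy inside training scripts. A minimal sketch that mirrors the first command above and fetches only the index shards:

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Download only the lightweight index shards; all media files stay remote.
local_dir = snapshot_download(
    "blanchon/cs2_dataset_render",
    repo_type="dataset",
    allow_patterns=["index/*.parquet"],
)
print(sorted(Path(local_dir).glob("index/*.parquet"))[:3])
```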


## Row Semantics


- `player` is the stable canonical 0-9 player index for a match.
- `spec_slot` is the transient CS2 spectator slot used only to record the POV.
- Recording starts at the playable round start (`freeze_end_tick`) and stops at
  the player's death tick, or at round end for survivors.
- The production worker validates exactly 10 canonical players per round,
  unique spec-slot resolution, no recording past death, and valid
  video/audio/inputs/world sidecars before upload (a downstream spot check is
  sketched after this list).
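
These invariants can also be spot-checked downstream. A minimal sketch, assuming every canonical player contributes at least one preview chunk per rendered round; this re-checks one invariant from the outside and is not the worker's actual validation code:

```python
import duckdb

# Flag rendered rounds that do not expose all 10 canonical player indices.
bad_rounds = duckdb.sql("""
    SELECT match_id, map_name, round, COUNT(DISTINCT player) AS players
    FROM 'hf://datasets/blanchon/cs2_dataset_render/data/**/chunks-preview-*.parquet'
    GROUP BY match_id, map_name, round
    HAVING COUNT(DISTINCT player) <> 10
""").df()
print(len(bad_rounds), "rounds with missing players")
```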