
OpenCS2 - POV Renders WebDataset

Browse with the OpenCS2 Viewer - every match, map and round, with all 10 player POVs synced on one timeline.

Tick-aligned Counter-Strike 2 POV training clips, rendered from blanchon/cs2_dataset_demo. Each sample is one player's perspective for one round; ten POVs per round share the same tick clock.

Each POV-round sample contains:

  • Video - 1280x720 @ 32 fps, near-lossless H.264, faststart, muxed with audio.
  • Audio - per-player stereo, mixed from that player's position and orientation.
  • Inputs - every tick: keys, mouse delta, view angles, fire/jump/use, weapon switches.
  • World state - every tick: player position, velocity, view, health, armor, weapon, alive flag.

This repo is the WebDataset packaging of blanchon/opencs2_dataset: the same POV rounds, grouped into large uncompressed tar shards with byte-offset indexes for fast streaming and sparse random access.

Current build: 169,960 POV samples (3,135.1 POV video hours, 313.5 synced round-timeline hours) across 2,711 uncompressed tar shards.

The lightweight preview WebDataset is separate: blanchon/opencs2_dataset_preview_wds.

Usage

The media-heavy training data is in tar shards; metadata/configs stay as parquet so filtering is cheap before media access.

| Config | Row | Use |
|---|---|---|
| train (default) | WebDataset samples: mp4 + ticks.parquet + json | high-throughput training, sequential shard streaming |
| wds_samples | one row per (match_id, map_name, round, player_slot) with tar byte offsets | random access, exact MP4/ticks range reads, download-size estimates |
| wds_shards | one row per tar shard | shard scheduling, cache planning, WIDS setup |
| pov_rounds | one row per player POV round with original loose media paths | filtering, compatibility with the base dataset |
| matches | one row per (match_id, map_name) with team/event metadata | match/map filtering |
| rounds | one row per round with tick boundaries and round outcome | round filtering |
| kills, duels, clip_events, round_player | analytical event tables | mining clips such as AWP 1v1s, clutches, smoke kills |
| ticks | map-level tick/input/world-state parquet files | position/input/world-state filtering before media access |
| enums | enum lookup table | mapping compact *_id columns back to labels |

Layout

shards/
  opencs2-<match_id>-<map_name>-<shard>.train.tar
index/
  wids_train.json          # WIDS shard descriptor
  wds_samples.parquet      # one row per POV sample, with tar byte offsets
  wds_shards.parquet       # one row per tar shard
  matches.parquet          # one row per rendered match/map
  rounds.parquet           # one row per round
  pov_rounds.parquet       # one row per player POV round
events/
  kills.parquet
  duels.parquet
  clip_events.parquet
  round_player.parquet
metadata/
  enums.parquet
ticks/
  match_id=<id>/map_name=<map>/ticks.parquet

The tar shards are plain .tar, not .tar.gz, so byte offsets stay valid. A sample member set looks like:

2391545-de_anubis-r01-p00.mp4
2391545-de_anubis-r01-p00.ticks.parquet
2391545-de_anubis-r01-p00.json
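
Member names appear to encode the match ID, map, round, and player slot. A small hedged helper can recover those fields from a sample key; the naming scheme below is inferred from the member list above, so verify it against the sample_key column in index/wds_samples.parquet before relying on it:

```python
import re

# Assumed key format: "<match_id>-<map_name>-r<round>-p<player_slot>",
# inferred from the example members above.
SAMPLE_KEY_RE = re.compile(
    r"^(?P<match_id>\d+)-(?P<map_name>[a-z0-9_]+)-r(?P<round>\d+)-p(?P<player_slot>\d+)$"
)

def parse_sample_key(key: str) -> dict:
    m = SAMPLE_KEY_RE.match(key)
    if m is None:
        raise ValueError(f"unrecognized sample key: {key!r}")
    d = m.groupdict()
    return {
        "match_id": d["match_id"],
        "map_name": d["map_name"],
        "round": int(d["round"]),
        "player_slot": int(d["player_slot"]),
    }
```

This is only for quick inspection; the authoritative mapping lives in the parquet indexes.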

Parquet Tables

String-like filter columns are dictionary encoded where useful, and most have a matching *_id column for fast integer joins or enum-based modeling. Player identity is always player_slot (0..9), not Steam ID or username.

| File | Rows | Purpose | Main schema |
|---|---|---|---|
| index/wds_samples.parquet | 169,960 | WebDataset sample index and byte offsets | media_id, match_id, map_name, round, player_slot, duration_s, frames, width, height, sample_key, shard_path, shard_size, mp4_member, mp4_offset, mp4_size, ticks_member, ticks_offset, ticks_size, json_member, json_offset, json_size |
| index/wds_shards.parquet | 2,711 | Shard inventory | shard_path, shard_size, n_samples, round_min, round_max, payload_bytes_sum, match_ids, map_names, player_slots |
| index/pov_rounds.parquet | 169,960 | One row per player POV round | match keys, side/weapon summary, capture ticks, death/survival, dimensions, duration_s, video_frames, original video path, media_bytes, original preview path/bytes, ticks_parquet_path |
| index/matches.parquet | 817 | One row per match/map | match_id, map_name, map_index, hltv_demo_id, match_url, event, teams, scores, winner, format, stars, match_date, rounds_played |
| index/rounds.parquet | 16,984 | One row per round | round tick boundaries, duration, winner/reason/bomb site, kill counts, side counts, opening kill summary, had_clutch_context, had_1v1 |
| events/kills.parquet | 115,894 | One row per kill | attacker/victim slots and sides, tick, event_seconds, weapon/class, hit details, alive counts before/after, trade/1v1/clutch/opening flags |
| events/duels.parquet | 115,894 | Kill events normalized as winner/loser duels | winner/loser slots and sides, weapon/class, distance, damage, hit details, alive counts, trade/1v1/clutch/opening flags |
| events/clip_events.parquet | 115,894 | Generic event table for clip mining | event_type, target/other slots, event_seconds, weapon/class, boolean flags such as headshot, through_smoke, one_v_one, clutch_context |
| events/round_player.parquet | 172,935 | Per-player round stats | match keys, player_slot, start side, kills, deaths, assists, headshots, kast |
| ticks/**/*.parquet | map-level | Tick/input/world-state index outside the tar shards | media_id, round, player_slot, tick, t, input button lists, view angles, weapon, health/armor, position, velocity |
| metadata/enums.parquet | 115 | Enum lookup | enum_name, enum_id, value |

The tick column t is the timestamp within the POV video. With the current event tables, compute event_video_seconds = event_seconds * 2.0 for media seeking, or join the event's tick against the selected POV's ticks.parquet and use its t column. Use media_bytes or mp4_size to estimate download cost before touching media.
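The two seeking strategies above can be sketched as small helpers. The synthetic tick table here is only for illustration; real tick tables come from the selected POV's ticks.parquet sidecar:

```python
import pandas as pd

def event_video_seconds(event_seconds: float) -> float:
    # Current event tables store event_seconds on the event clock;
    # the POV video timeline runs at 2x that, per the note above.
    return event_seconds * 2.0

def nearest_tick_time(ticks: pd.DataFrame, event_tick: int) -> float:
    # Alternative: join the event tick against the POV's ticks.parquet
    # and read its t column (timestamp in the POV video).
    idx = (ticks["tick"] - event_tick).abs().idxmin()
    return float(ticks.loc[idx, "t"])

# Toy tick table standing in for a real ticks.parquet sidecar.
ticks = pd.DataFrame({"tick": [64, 128, 192], "t": [2.0, 4.0, 6.0]})
```

The tick join avoids the 2.0 factor entirely and stays correct if the scaling changes in a future build.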

Install

uv pip install duckdb pyarrow pandas requests huggingface_hub torch torchcodec pillow webdataset wids

For metadata-only work you only need duckdb, pyarrow, and huggingface_hub.

Filter Without Downloading Media

Use DuckDB over the parquet files. This only downloads the selected parquet row groups, not MP4s.

import duckdb

con = duckdb.connect()
con.sql("INSTALL httpfs; LOAD httpfs;")

awp_1v1 = con.sql("""
SELECT
  d.match_id,
  d.map_name,
  d.round,
  d.winner_player_slot AS player_slot,
  d.event_seconds AS event_table_seconds,
  d.event_seconds * 2.0 AS event_video_seconds,
  d.weapon,
  s.shard_path,
  s.mp4_offset,
  s.mp4_size,
  s.ticks_offset,
  s.ticks_size
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/duels.parquet' AS d
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/wds_samples.parquet' AS s
  ON d.match_id = s.match_id
 AND d.map_name = s.map_name
 AND d.round = s.round
 AND d.winner_player_slot = s.player_slot
WHERE d.weapon = 'awp'
  AND d.is_1v1_before
""").df()

print(awp_1v1.head())
print("estimated MP4 bytes:", int(awp_1v1["mp4_size"].sum()))

Other useful filters:

-- Long Mirage rounds with a bomb plant.
SELECT *
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/index/rounds.parquet'
WHERE map_name = 'de_mirage' AND has_bomb_plant AND round_duration_s > 60;

-- All kills through smoke, with killer POV.
SELECT k.*, s.shard_path, s.mp4_offset, s.mp4_size
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/kills.parquet' k
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/wds_samples.parquet' s
  ON k.match_id = s.match_id
 AND k.map_name = s.map_name
 AND k.round = s.round
 AND k.attacker_player_slot = s.player_slot
WHERE k.through_smoke;

Verified Clip Recipes

These filters were tested by exporting 10 local examples each. For kill-derived examples, center the clip on event_seconds * 2.0 with the current event tables.

Common helper:

This helper writes video-only MP4 clips through TorchCodec. It decodes the selected range as a PyTorch uint8 tensor, then encodes it back to H.264 MP4.

import json
import re
from pathlib import Path

import duckdb
from huggingface_hub import hf_hub_url
from PIL import Image
from torchcodec.decoders import VideoDecoder
from torchcodec.encoders import VideoEncoder

REPO = "blanchon/opencs2_dataset_wds"
OUT = Path("opencs2_examples")
FPS = 32.0

def hf_path_to_url(path):
    repo_id, revision, filename = re.match(r"hf://datasets/([^@]+)@([^/]+)/(.+)", path).groups()
    return hf_hub_url(repo_id=repo_id, repo_type="dataset", revision=revision, filename=filename)

def open_mp4(row):
    return hf_path_to_url(row["video_path"])

def save_clip(row, name, before=5.0, after=5.0):
    center = float(row["event_video_seconds"])
    start = max(0.0, center - before)
    stop = min(float(row["duration_s"]), center + after)
    out = OUT / name / f"{row['event_id']}.mp4"
    out.parent.mkdir(parents=True, exist_ok=True)
    frames = VideoDecoder(
        open_mp4(row),
        seek_mode="approximate",
        dimension_order="NCHW",
    ).get_frames_played_in_range(start_seconds=start, stop_seconds=stop, fps=FPS)
    VideoEncoder(frames.data, frame_rate=FPS).to_file(
        out,
        codec="libx264",
        pixel_format="yuv420p",
        crf=20,
        preset="veryfast",
        extra_options={"x264-params": "keyint=32:min-keyint=1:scenecut=0:open-gop=0"},
    )
    return out

def save_png(frame_hwc, path):
    Image.fromarray(frame_hwc.cpu().numpy()).save(path)

def save_frame_pair(row, name):
    out = OUT / name / f"{row['media_id']}-{int(row['tick'])}"
    out.mkdir(parents=True, exist_ok=True)
    frames = VideoDecoder(
        open_mp4(row),
        seek_mode="approximate",
        dimension_order="NHWC",
    ).get_frames_played_at(seconds=[float(row["t"]), float(row["next_t"])])
    frame_t = frames.data[0]
    frame_t1 = frames.data[1]

    save_png(frame_t, out / "frame_t.png")
    save_png(frame_t1, out / "frame_t_plus_1.png")

    tick_t = {k: v for k, v in row.items() if not k.startswith("next_") and k != "video_path"}
    tick_t_plus_1 = {**tick_t, "tick": int(row["next_tick"]), "t": float(row["next_t"])}
    (out / "tick_t.json").write_text(json.dumps(tick_t, indent=2, default=str) + "\n")
    (out / "tick_t_plus_1.json").write_text(json.dumps(tick_t_plus_1, indent=2, default=str) + "\n")
    return out

con = duckdb.connect()
con.sql("INSTALL httpfs; LOAD httpfs;")

AWP 1v1 duel, winner POV

rows = con.sql("""
SELECT d.duel_id AS event_id, d.event_seconds * 2.0 AS event_video_seconds,
       d.weapon, d.distance, d.headshot, p.duration_s,
       struct_extract(p.video, 'path') AS video_path
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/duels.parquet' d
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON d.match_id=p.match_id AND d.map_name=p.map_name AND d.round=p.round
 AND d.winner_player_slot=p.player_slot
WHERE d.weapon='awp' AND d.is_1v1_before
  AND p.duration_s >= d.event_seconds * 2.0 + 5.0
ORDER BY d.event_seconds
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "awp_1v1_duel")

Kill through smoke, attacker POV

rows = con.sql("""
SELECT k.kill_id AS event_id, k.event_seconds * 2.0 AS event_video_seconds,
       k.weapon, k.distance, k.headshot, p.duration_s,
       struct_extract(p.video, 'path') AS video_path
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/kills.parquet' k
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON k.match_id=p.match_id AND k.map_name=p.map_name AND k.round=p.round
 AND k.attacker_player_slot=p.player_slot
WHERE k.through_smoke
  AND p.duration_s >= k.event_seconds * 2.0 + 5.0
ORDER BY k.event_seconds
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "kill_through_smoke")

Noscope or wallbang highlight

rows = con.sql("""
SELECT k.kill_id AS event_id, k.event_seconds * 2.0 AS event_video_seconds,
       k.weapon, k.noscope, k.wallbang, k.penetrated, p.duration_s,
       struct_extract(p.video, 'path') AS video_path
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/kills.parquet' k
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON k.match_id=p.match_id AND k.map_name=p.map_name AND k.round=p.round
 AND k.attacker_player_slot=p.player_slot
WHERE (k.noscope OR k.wallbang OR k.penetrated > 0)
  AND p.duration_s >= k.event_seconds * 2.0 + 5.0
ORDER BY k.noscope DESC, k.wallbang DESC, k.penetrated DESC
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "noscope_wallbang")

Knife kill

rows = con.sql("""
SELECT k.kill_id AS event_id, k.event_seconds * 2.0 AS event_video_seconds,
       k.weapon, p.duration_s, struct_extract(p.video, 'path') AS video_path
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/kills.parquet' k
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON k.match_id=p.match_id AND k.map_name=p.map_name AND k.round=p.round
 AND k.attacker_player_slot=p.player_slot
WHERE (lower(k.weapon_class)='knife' OR lower(k.weapon) LIKE '%knife%'
       OR lower(k.weapon) LIKE '%bayonet%' OR lower(k.weapon) LIKE '%karambit%')
  AND p.duration_s >= k.event_seconds * 2.0 + 5.0
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "knife_kill")

Five kills by the same player in under 10 seconds

rows = con.sql("""
WITH streaks AS (
  SELECT match_id, map_name, round, attacker_player_slot AS player_slot,
         COUNT(*) AS n_kills,
         MIN(event_seconds) * 2.0 AS first_kill_video_seconds,
         MAX(event_seconds) * 2.0 AS last_kill_video_seconds
  FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/kills.parquet'
  GROUP BY match_id, map_name, round, attacker_player_slot
  HAVING COUNT(*) >= 5 AND MAX(event_seconds) - MIN(event_seconds) < 10.0
)
SELECT concat('streak-', s.match_id, '-', s.map_name, '-r', s.round, '-p', s.player_slot) AS event_id,
       s.first_kill_video_seconds AS event_video_seconds,
       s.last_kill_video_seconds, s.n_kills, p.duration_s,
       struct_extract(p.video, 'path') AS video_path
FROM streaks s
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON s.match_id=p.match_id AND s.map_name=p.map_name AND s.round=p.round
 AND s.player_slot=p.player_slot
ORDER BY s.last_kill_video_seconds - s.first_kill_video_seconds
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "five_kills_under_10s", before=2.0, after=row["last_kill_video_seconds"] - row["event_video_seconds"] + 2.0)

Very long distance kill

rows = con.sql("""
SELECT k.kill_id AS event_id, k.event_seconds * 2.0 AS event_video_seconds,
       k.weapon, k.distance, p.duration_s, struct_extract(p.video, 'path') AS video_path
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/events/kills.parquet' k
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON k.match_id=p.match_id AND k.map_name=p.map_name AND k.round=p.round
 AND k.attacker_player_slot=p.player_slot
WHERE k.distance IS NOT NULL
  AND p.duration_s >= k.event_seconds * 2.0 + 5.0
ORDER BY k.distance DESC
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "long_distance_kill")

Specific map position, video clip

rows = con.sql("""
WITH t AS (
  SELECT DISTINCT ON (match_id) media_id, match_id, map_name, round, player_slot,
         t AS event_video_seconds, x, y, z, input_weapon
  FROM 'hf://datasets/blanchon/opencs2_dataset_wds/ticks/match_id=*/map_name=de_ancient/ticks.parquet'
  WHERE is_alive AND tick % 64 = 0 AND t >= 5.0
    AND x BETWEEN 1000 AND 1250 AND y BETWEEN -1000 AND -750
  ORDER BY match_id, t
)
SELECT concat('pos-', t.media_id, '-', round(t.event_video_seconds, 2)) AS event_id,
       t.*, p.duration_s, struct_extract(p.video, 'path') AS video_path
FROM t
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON t.media_id=p.media_id
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_clip(row, "position_based_clip")

boosting_top_player: higher player POV

Use ticks to find a higher player above a nearby lower player for multiple consecutive ticks. This is a heuristic, so visually inspect results.

-- Core condition used in the verified examples:
xy_distance < 36
z_delta BETWEEN 45 AND 90
abs(top.velocity_z) < 12
abs(lower.velocity_z) < 12
support_ticks >= 16

frame_pair_preview: frame pair at a specific position

rows = con.sql("""
WITH t AS (
  SELECT media_id, match_id, map_name, round, player_slot, tick, t, x, y, z
  FROM 'hf://datasets/blanchon/opencs2_dataset_wds/ticks/match_id=*/map_name=de_ancient/ticks.parquet'
  WHERE is_alive AND t > 5.0 AND tick % 64 = 0
    AND x BETWEEN 1000 AND 1250 AND y BETWEEN -1000 AND -750
),
pairs AS (
  SELECT DISTINCT ON (a.match_id) a.*, b.tick AS next_tick, b.t AS next_t
  FROM t a JOIN t b ON a.media_id=b.media_id AND a.tick + 2 = b.tick
  ORDER BY a.match_id, a.t
)
SELECT pairs.*, struct_extract(p.video, 'path') AS video_path
FROM pairs
JOIN 'hf://datasets/blanchon/opencs2_dataset_wds/index/pov_rounds.parquet' p
  ON pairs.media_id=p.media_id
LIMIT 10
""").df()

for row in rows.to_dict("records"):
    save_frame_pair(row, "frame_pair_preview")

Read One POV Or One Timestamp

index/wds_samples.parquet stores the byte range of each MP4 inside its tar shard. The MP4 bytes are identical to a standalone MP4; the reader below shifts all HTTP range requests by the tar member offset.

import io
import os
import requests
from huggingface_hub import hf_hub_url
from torchcodec.decoders import AudioDecoder, VideoDecoder

REPO = "blanchon/opencs2_dataset_wds"

class HfTarMember(io.RawIOBase):
    def __init__(self, shard_url, offset, size, token=None, session=None):
        self.shard_url = shard_url
        self.offset = int(offset)
        self.size = int(size)
        self.pos = 0
        self.session = session or requests.Session()
        self.headers = {}
        token = token or os.environ.get("HF_TOKEN")
        if token:
            self.headers["Authorization"] = f"Bearer {token}"

    def readable(self):
        return True

    def seekable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        elif whence == io.SEEK_END:
            self.pos = self.size + offset
        self.pos = max(0, min(self.pos, self.size))
        return self.pos

    def read(self, n=-1):
        if self.pos >= self.size:
            return b""
        if n is None or n < 0:
            n = self.size - self.pos
        n = min(n, self.size - self.pos)
        start = self.offset + self.pos
        stop = start + n - 1
        headers = dict(self.headers)
        headers["Range"] = f"bytes={start}-{stop}"
        r = self.session.get(self.shard_url, headers=headers, timeout=60)
        r.raise_for_status()
        data = r.content
        self.pos += len(data)
        return data

def member_url(row):
    return hf_hub_url(REPO, row["shard_path"], repo_type="dataset")

def open_mp4(row):
    return HfTarMember(member_url(row), row["mp4_offset"], row["mp4_size"])

# row can come from DuckDB, pandas, or pyarrow.
row = awp_1v1.iloc[0].to_dict()
start = max(0.0, row["event_video_seconds"] - 5.0)
stop = row["event_video_seconds"] + 5.0

video = VideoDecoder(open_mp4(row), seek_mode="approximate", dimension_order="NHWC")
clip = video.get_frames_played_in_range(start_seconds=start, stop_seconds=stop)

audio = AudioDecoder(open_mp4(row))
samples = audio.get_samples_played_in_range(start_seconds=start, stop_seconds=stop)

For a browser viewer, use the same shard_path, mp4_offset, and mp4_size: create a URL source for the shard, slice [mp4_offset, mp4_offset + mp4_size), then give the slice to the MP4 demuxer.
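The slice arithmetic is the same in any client. As a minimal sketch, the HTTP Range header for one MP4 member inside its shard is:

```python
def mp4_range_header(mp4_offset: int, mp4_size: int) -> str:
    # HTTP Range is inclusive on both ends, so the last byte of the
    # member is offset + size - 1.
    return f"bytes={mp4_offset}-{mp4_offset + mp4_size - 1}"
```

Because the shards are uncompressed tar, the bytes returned for this range are exactly a standalone MP4 file.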

Read The Tick Sidecar

Each WDS sample also contains its per-POV tick parquet. Fetch it by range from the same tar shard:

import pyarrow as pa
import pyarrow.parquet as pq

def read_member_bytes(row, offset_col, size_col):
    f = HfTarMember(member_url(row), row[offset_col], row[size_col])
    return f.read()

tick_bytes = read_member_bytes(row, "ticks_offset", "ticks_size")
ticks = pq.read_table(pa.BufferReader(tick_bytes)).to_pandas()

For global filtering by position, weapon, or health across many samples, use the external ticks/match_id=<id>/map_name=<map>/ticks.parquet files instead. They are map-level parquet indexes and avoid opening tar shards during the filtering phase.

SELECT media_id, round, player_slot, tick, t, x, y, z, input_weapon
FROM 'hf://datasets/blanchon/opencs2_dataset_wds/ticks/match_id=2391545/map_name=de_anubis/ticks.parquet'
WHERE x BETWEEN -500 AND 500
  AND y BETWEEN -2000 AND -1200
  AND is_alive;
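
A hedged helper can build these map-level paths programmatically; it simply mirrors the ticks/match_id=<id>/map_name=<map>/ticks.parquet layout described above:

```python
def ticks_path(match_id: str, map_name: str,
               repo: str = "blanchon/opencs2_dataset_wds") -> str:
    # Mirrors the ticks/match_id=<id>/map_name=<map>/ticks.parquet
    # layout from the Layout section; usable directly in DuckDB.
    return (f"hf://datasets/{repo}/ticks/"
            f"match_id={match_id}/map_name={map_name}/ticks.parquet")
```

Pass the result straight into a DuckDB FROM clause, or use a glob over match_id=* for map-wide scans.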

Frame Pair Samples

For (frame_t, tick_t, frame_t+1, tick_t+1), use the video WDS and decode frames on demand. This keeps storage smaller than a pre-extracted frame dataset while preserving arbitrary temporal access.

import numpy as np

t0 = 12.0
t1 = t0 + 1.0 / 32.0

tick0 = ticks.iloc[(ticks["t"] - t0).abs().argmin()]
tick1 = ticks.iloc[(ticks["t"] - t1).abs().argmin()]

frames = VideoDecoder(open_mp4(row), seek_mode="approximate", dimension_order="NHWC").get_frames_played_at(
    seconds=[float(tick0["t"]), float(tick1["t"])]
)

frame_t = frames.data[0]
frame_t1 = frames.data[1]

For throughput, sample several timestamps from the same POV and call get_frames_played_at() once with the full timestamp list; reopening the decoder for each frame pair is much slower.
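
The grouping step can be sketched as a pure function: collect requested (media_id, seconds) pairs, then emit one sorted timestamp list per POV so each decoder is opened once:

```python
from collections import defaultdict

def group_timestamps(requests):
    # requests: iterable of (media_id, seconds) pairs.
    # Returns {media_id: sorted list of timestamps} so each POV is
    # opened once and decoded with a single get_frames_played_at call.
    grouped = defaultdict(list)
    for media_id, seconds in requests:
        grouped[media_id].append(float(seconds))
    return {mid: sorted(ts) for mid, ts in grouped.items()}
```

Sorting the timestamps also keeps each decode pass moving forward through the file, which is friendlier to approximate seeking.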

High-Throughput Training

Use the parquet tables to build the sample set first, then feed only selected shards/samples to the loader. The fastest pattern depends on access shape:

  • full or mostly-full rounds: use WebDataset/WIDS with a local NVMe shard cache;
  • sparse 10 second clips: use wds_samples.parquet byte offsets and TorchCodec range reads;
  • frame pairs: group many requested timestamps by media_id, decode them in batches, then shuffle the emitted pairs.

Recommended randomness strategy:

  1. shuffle shards each epoch;
  2. shuffle samples within each shard;
  3. keep a bounded cross-shard sample buffer, for example 64-256 samples per worker;
  4. group nearby timestamps from the same media_id before decoding, then shuffle outputs after decode.
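
Step 3, the bounded cross-shard buffer, is what webdataset's .shuffle(n) does internally; a minimal standalone sketch of the same idea:

```python
import random

def shuffle_buffer(iterable, buffer_size=256, seed=0):
    # Bounded reservoir-style shuffle: keep at most buffer_size items,
    # emit a random one whenever the buffer overflows, then drain the
    # remainder in random order at the end.
    rng = random.Random(seed)
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) > buffer_size:
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    rng.shuffle(buf)
    yield from buf
```

Larger buffers give better mixing at the cost of memory and startup latency; the 64-256 per worker suggested above is a reasonable starting range for media-sized samples.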

WIDS descriptor:

from huggingface_hub import hf_hub_url
import wids

index_url = hf_hub_url(
    "blanchon/opencs2_dataset_wds",
    "index/wids_train.json",
    repo_type="dataset",
)

ds = wids.ShardListDataset(
    index_url,
    cache_dir="/local_nvme/opencs2_wids",
    lru_size=16,
)
sample = ds[0]

Classic streaming WebDataset:

import pyarrow.parquet as pq
import webdataset as wds
from huggingface_hub import hf_hub_download, hf_hub_url

repo = "blanchon/opencs2_dataset_wds"
index_path = hf_hub_download(repo, "index/wds_shards.parquet", repo_type="dataset")
shard_paths = pq.read_table(index_path, columns=["shard_path"]).column("shard_path").to_pylist()
urls = [hf_hub_url(repo, path, repo_type="dataset") for path in shard_paths]

dataset = (
    wds.WebDataset(urls, shardshuffle=True)
    .shuffle(128)
)

For this dataset, prefer the explicit shard list from index/wds_shards.parquet or index/wids_train.json over a brace pattern: shard names include match IDs and map names.

Downloading

Metadata only:

hf download blanchon/opencs2_dataset_wds --repo-type dataset \
  --include "index/*.parquet" \
  --include "events/*.parquet" \
  --include "metadata/*.parquet"

One shard:

hf download blanchon/opencs2_dataset_wds --repo-type dataset \
  --include "shards/opencs2-2391545-de_anubis-000000.train.tar"

For programmatic URL construction, use huggingface_hub.hf_hub_url() for range reads and DuckDB hf://datasets/... URLs for parquet scans.

Creation

Built from HLTV .dem files with a headless CS2 recorder. The recorder replays demos, captures all 10 player POVs, validates tick/frame boundaries, muxes audio into the MP4, and writes typed parquet sidecars. This WebDataset repo repackages the round-based media from blanchon/opencs2_dataset into large tar shards plus byte-offset indexes to avoid the many-small-files problem.

Licensing

.dem source data is mirrored from HLTV; downstream use is bound by the original tournament terms. Renders and metadata are released as CC-BY-4.0.

Citation

@misc{blanchon2026opencs2,
  author       = {Julien Blanchon},
  title        = {OpenCS2 Dataset},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://github.com/julien-blanchon/opencs2-dataset}}
}