---
# Data-files config (Parquet-only)
configs:
- config_name: basic
  data_files:
  - split: test
    path:
    - "basic/test-*.parquet"
    - "basic/test.parquet"
  default: true
- config_name: advanced
  data_files:
  - split: test
    path:
    - "advanced/test-*.parquet"
    - "advanced/test.parquet"
pretty_name: "Gametime"
tags:
- audio
- speech
- tts
- asr
- benchmark
task_categories:
- automatic-speech-recognition
- text-to-speech
- audio-to-audio
language:
- en
license: cc-by-4.0
size_categories:
- n<100K
---

# Gametime Benchmark

The **Gametime** dataset provides lightweight, streaming-friendly splits for TTS/ASR/SpokenLM prototyping.

For full details, please refer to the paper:
👉 [**Game-Time: Evaluating Temporal Dynamics in Spoken Language Models**](https://arxiv.org/abs/2509.26388)

---

## 📦 Download Options

### 1️⃣ Recommended — Full ZIP Download

If you prefer the original folder layout, you can download one of the ZIPs packaged in `gametime/download/`. There are two available in this repository:

* `gametime/download/basic_instructions.zip` — unpacks to:

```
basic_instructions/
├── text/
│   └── *-dataset.json        # per-dataset JSON manifest(s)
├── audios/
│   └── <dataset>/
│       └── test/*.wav
└── alignments/               # per-audio alignment files
    └── <dataset>/
        └── <audio-id>.jsonl
```

* `gametime/download/advanced_instructions.zip` — unpacks to:

```
advanced_instructions/
├── text/
│   └── *-dataset.json        # per-dataset JSON manifest(s) with timing tokens
├── audios/
│   └── <dataset>/
│       └── test/*.wav
└── alignments/               # per-audio alignment files
    └── <dataset>/
        └── <audio-id>.jsonl
```

Notes:

* Each ZIP in `gametime/download/` preserves the original source tree names (`basic_instructions/` or `advanced_instructions/`).
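Once unpacked, the alignment files are plain JSON Lines (one JSON object per line). A minimal sketch for loading them, assuming a `basic_instructions/` tree in the working directory; the field names inside each record are not documented here, so inspect one record before relying on specific keys:

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts (one dict per non-blank line)."""
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records

# Iterate every alignment file under the unzipped tree (if it is present).
root = Path("basic_instructions/alignments")
if root.is_dir():
    for jsonl_path in sorted(root.rglob("*.jsonl")):
        records = load_jsonl(jsonl_path)
        print(jsonl_path, "->", len(records), "alignment records")
```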
Download example (Hugging Face):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="gametime-benchmark/gametime",
    repo_type="dataset",
    filename="download/basic_instructions.zip",
    revision="main",
    local_dir=".",
)
print("saved to:", path)
```

Unzip example:

```bash
unzip download/basic_instructions.zip
```

---

### 2️⃣ Optional — Stream from Hugging Face

```python
from datasets import load_dataset
import io
import soundfile as sf

# Load the Basic test split
ds_basic = load_dataset("gametime-benchmark/gametime", "basic", split="test", streaming=True)
ex = next(iter(ds_basic))
buf = io.BytesIO(ex["audio_bytes"])
wav, sr = sf.read(buf, dtype="float32")
print(ex["id"], sr, len(wav), ex["text"])

# Load the Advanced test split
ds_adv = load_dataset("gametime-benchmark/gametime", "advanced", split="test", streaming=True)
ex_adv = next(iter(ds_adv))
buf_adv = io.BytesIO(ex_adv["audio_bytes"])
wav_adv, sr_adv = sf.read(buf_adv, dtype="float32")
print(ex_adv["id"], sr_adv, len(wav_adv), ex_adv["text"])
```

* Works with **`streaming=True`** — no full download needed
* Requires only `soundfile` (libsndfile)

---

## 📑 Schema

Each Parquet row has:

| Column          | Type  | Description                                                    |
| --------------- | ----- | -------------------------------------------------------------- |
| `id`            | str   | e.g. `1-a-Sequence-Number/train/1-a-Sequence-Number-01-01.wav` |
| `category`      | str   | `"basic"` or `"advanced"`                                      |
| `dataset`       | str   | group name (e.g. `1-a-Sequence-Number`)                        |
| `split`         | str   | `train` or `test`                                              |
| `template_idx`  | str   | template index, if available                                   |
| `item_idx`      | str   | item index, if available                                       |
| `text`          | str   | reference transcription / prompt                               |
| `alignment`     | str   | alignment metadata                                             |
| `audio_bytes`   | bytes | raw WAV file bytes                                             |
| `audio_format`  | str   | `"wav"`                                                        |
| `sampling_rate` | int   | e.g., `16000`                                                  |

---

## 📚 Citation

If you use this dataset, please cite:

```
@article{chang2025gametime,
  title   = {Game-Time: Evaluating Temporal Dynamics in Spoken Language Models},
  author  = {Kai-Wei Chang and En-Pei Hu and Chun-Yi Kuan and Wenze Ren and Wei-Chih Chen and Guan-Ting Lin and Yu Tsao and Shao-Hua Sun and Hung-yi Lee and James Glass},
  year    = {2025},
  journal = {arXiv preprint arXiv:2509.26388},
  url     = {https://arxiv.org/abs/2509.26388}
}
```

---
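Since each row's `audio_bytes` column holds a complete WAV file, the original audio tree can be restored by writing the bytes straight back to disk. A small sketch, mirroring the slash-separated `id` as a directory layout; the `write_wav` helper and the `restored/` output directory are our own names, not part of the dataset:

```python
from pathlib import Path

def write_wav(audio_bytes: bytes, row_id: str, out_dir: str = "restored") -> Path:
    """Write one row's raw WAV bytes to disk, reusing the slash-separated
    `id` (e.g. '<dataset>/test/<name>.wav') as the relative output path."""
    out_path = Path(out_dir) / row_id
    out_path.parent.mkdir(parents=True, exist_ok=True)  # create <dataset>/test/
    out_path.write_bytes(audio_bytes)
    return out_path

if __name__ == "__main__":
    # Restore the first clip of the Basic test split (requires network access).
    from datasets import load_dataset

    ds = load_dataset("gametime-benchmark/gametime", "basic", split="test", streaming=True)
    ex = next(iter(ds))
    print("wrote", write_wav(ex["audio_bytes"], ex["id"]))
```

Drop the `next(iter(ds))` shortcut and loop over `ds` to restore a whole split.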