---
license: other
task_categories:
  - audio-classification
pretty_name: SRE Dataset
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test/audio/*.tar
tags:
  - audio
  - speech
  - speaker-recognition
  - sre
  - webdataset
---

# Yougen/said_testset

A Speaker Recognition (SRE) speech dataset, packed as WebDataset tar shards.

## Layout

```
data/
  train/
    metadata.csv
    audio/
      train-000.tar
      train-001.tar
      ...
  validation/
    metadata.csv
    audio/
      validation-000.tar
      ...
  test/
    metadata.csv
    audio/
      test-000.tar
      ...
```

Shard counts:

- test: 20 tar shards

Inside each tar, every sample is a pair of files sharing a unique key:

```
<key>.wav       # raw audio bytes
<key>.json      # {"id":..., "rel_path":..., "wav_format":..., "duration":..., "label_str":..., "label":...}
```

`metadata.csv` columns: `key`, `shard`, `id`, `rel_path`, `wav_format`, `duration`, `label_str`, `label`
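The metadata file can be inspected without downloading the audio shards. A minimal sketch using only the standard library, for example to total audio duration per speaker label (the sample rows below are invented for illustration; real rows come from `data/test/metadata.csv`):

```python
import csv
import io
from collections import defaultdict

# Invented rows following the documented column layout.
sample_csv = """key,shard,id,rel_path,wav_format,duration,label_str,label
utt0001,test-000,utt0001,audio/utt0001.wav,wav,3.2,spk_a,0
utt0002,test-000,utt0002,audio/utt0002.wav,wav,1.8,spk_b,1
utt0003,test-001,utt0003,audio/utt0003.wav,wav,2.5,spk_a,0
"""

def duration_per_label(csv_text: str) -> dict:
    """Sum the duration column (seconds) per label_str."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["label_str"]] += float(row["duration"])
    return dict(totals)

print(duration_per_label(sample_csv))
```

For the real file, replace `io.StringIO(csv_text)` with an open file handle on `metadata.csv`.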

## Loading

```python
from datasets import load_dataset

ds = load_dataset("Yougen/said_testset")
print(ds)
print(ds["test"][0])
# sample keys: 'wav' (decoded audio), 'json' (metadata), '__key__', '__url__'
```

For streaming (no full download needed):

```python
ds = load_dataset("Yougen/said_testset", streaming=True)
for example in ds["test"]:
    print(example["__key__"], example["json"]["label_str"])
    break
```

Hugging Face's webdataset builder automatically pairs `<key>.wav` with `<key>.json` inside each tar and decodes the audio.
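The same pairing convention can be reproduced with the standard library alone, which is handy for spot-checking a shard offline. A sketch that builds a tiny in-memory tar in the documented layout and regroups its members by key (the key and payloads here are invented, and the "audio" bytes are fake):

```python
import io
import json
import tarfile
from collections import defaultdict

def make_demo_shard() -> bytes:
    """Build an in-memory tar holding one <key>.wav / <key>.json pair."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, payload in [
            ("utt0001.wav", b"RIFF....WAVEfmt "),  # fake audio bytes
            ("utt0001.json", json.dumps({"label_str": "spk_a", "label": 0}).encode()),
        ]:
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def group_by_key(tar_bytes: bytes) -> dict:
    """Map each sample key to a dict of {extension: raw bytes}."""
    samples = defaultdict(dict)
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar.getmembers():
            key, _, ext = member.name.rpartition(".")
            samples[key][ext] = tar.extractfile(member).read()
    return dict(samples)

samples = group_by_key(make_demo_shard())
meta = json.loads(samples["utt0001"]["json"])
print(meta["label_str"])
```

To inspect a real shard, pass the bytes of any `data/test/audio/test-NNN.tar` to `group_by_key` instead of the demo tar.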