---
license: other
task_categories:
  - audio-classification
pretty_name: LID Dataset
configs:
  - config_name: default
    data_files:
      - split: test1
        path: data/test1/audio/*.tar
      - split: test2
        path: data/test2/audio/*.tar
      - split: test3
        path: data/test3/audio/*.tar
      - split: test4
        path: data/test4/audio/*.tar
      - split: test5
        path: data/test5/audio/*.tar
tags:
  - audio
  - speech
  - language-identification
  - lid
  - webdataset
---

# Yougen/lid_testset

Language Identification (LID) speech dataset, packed as WebDataset tar shards.

## Layout

```
data/
  test1/
    metadata.csv
    audio/
      test1-000.tar
      test1-001.tar
      ...
  test2/
    metadata.csv
    audio/
      test2-000.tar
      ...
  ...
  test5/
    metadata.csv
    audio/
      test5-000.tar
      ...
```

Shard counts:

- test1: 40 tar shards
- test2: 24 tar shards
- test3: 17 tar shards
- test4: 24 tar shards
- test5: 13 tar shards
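If you have a local snapshot of the repository, the per-split shard counts above can be verified with a small helper; this is a sketch that only assumes the directory layout described in this card (the function name is illustrative):

```python
import os
from glob import glob

def count_shards(root, splits=("test1", "test2", "test3", "test4", "test5")):
    """Count *.tar shards per split under a local copy of the dataset.

    `root` is the directory containing the top-level `data/` folder.
    """
    return {
        split: len(glob(os.path.join(root, "data", split, "audio", "*.tar")))
        for split in splits
    }
```

On a full download, `count_shards(".")` should report 40, 24, 17, 24, and 13 shards for test1 through test5 respectively.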

Inside each tar, every sample is a pair sharing a unique key:

```
<key>.wav       # raw audio bytes
<key>.json      # {"id":..., "rel_path":..., "wav_format":..., "duration":..., "label_str":..., "label":...}
```

`metadata.csv` columns: `key`, `shard`, `id`, `rel_path`, `wav_format`, `duration`, `label_str`, `label`
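The pairing convention is plain WebDataset: files that share the part of the name before the extension belong to one sample. A self-contained sketch using only the standard library, with made-up metadata values, shows how a `<key>.wav` / `<key>.json` pair round-trips through a tar shard:

```python
import io
import json
import tarfile

# Illustrative sample; the key and metadata values are made up.
key = "sample-000000"
meta = {"id": key, "rel_path": f"{key}.wav", "wav_format": "wav",
        "duration": 1.0, "label_str": "en", "label": 0}
wav_bytes = b"\x00\x01\x02\x03"  # stand-in for raw audio bytes

# Write one <key>.wav / <key>.json pair into an in-memory tar shard.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in ((f"{key}.wav", wav_bytes),
                          (f"{key}.json", json.dumps(meta).encode())):
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Read the shard back, grouping members by the part before the extension.
buf.seek(0)
samples = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        k, ext = member.name.rsplit(".", 1)
        samples.setdefault(k, {})[ext] = tar.extractfile(member).read()

print(json.loads(samples[key]["json"])["label_str"])  # -> en
```

This grouping by key is exactly what the `datasets` WebDataset builder does for you when loading the shards below.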

## Loading

```python
from datasets import load_dataset

ds = load_dataset("Yougen/lid_testset")
print(ds)
print(ds["test1"][0])
# sample keys: 'wav' (decoded audio), 'json' (metadata), '__key__', '__url__'
```

For streaming (no full download needed):

```python
ds = load_dataset("Yougen/lid_testset", streaming=True)
for example in ds["test1"]:
    print(example["__key__"], example["json"]["label_str"])
    break
```

Hugging Face's `webdataset` builder automatically pairs `<key>.wav` with `<key>.json` inside each tar and decodes the audio.
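Once decoded, each sample carries its label in `example["json"]["label_str"]`, so split-level statistics reduce to iterating the stream. A minimal sketch for tallying the label distribution; the small in-line list stands in for a streamed split such as `ds["test1"]`, and the label values are made up:

```python
from collections import Counter

def label_distribution(examples):
    """Tally label_str values over an iterable of decoded WebDataset samples."""
    return Counter(ex["json"]["label_str"] for ex in examples)

# Stand-in for a streamed split such as ds["test1"]; labels are illustrative.
fake_split = [
    {"__key__": "a", "json": {"label_str": "en"}},
    {"__key__": "b", "json": {"label_str": "zh"}},
    {"__key__": "c", "json": {"label_str": "en"}},
]
print(label_distribution(fake_split))  # Counter({'en': 2, 'zh': 1})
```

The same function works unchanged on a streaming split, since it only requires an iterable of samples.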