---
license: other
task_categories:
- audio-classification
pretty_name: ASC Dataset
configs:
- config_name: default
  data_files:
  - split: test_a1
    path: data/test_a1/audio/*.tar
  - split: test_a2
    path: data/test_a2/audio/*.tar
  - split: test_a3
    path: data/test_a3/audio/*.tar
  - split: test_a4
    path: data/test_a4/audio/*.tar
  - split: test_a5
    path: data/test_a5/audio/*.tar
  - split: test_p1
    path: data/test_p1/audio/*.tar
  - split: test_p2
    path: data/test_p2/audio/*.tar
tags:
- audio
- speech
- audio-scene-classification
- asc
- webdataset
---
# Yougen/asc_testset

Audio Scene Classification (ASC) speech dataset, packed as WebDataset tar shards.
## Layout

```
data/
  test_a1/
    metadata.csv
    audio/
      test_a1-000.tar
      test_a1-001.tar
      ...
  test_a2/
    metadata.csv
    audio/
      ...
  ...
  test_p2/
    metadata.csv
    audio/
      ...
```

Each of the seven test splits (`test_a1`–`test_a5`, `test_p1`, `test_p2`) follows the same layout.
Shard counts:

- test_a1: 8 tar shards
- test_a2: 16 tar shards
- test_a3: 13 tar shards
- test_a4: 15 tar shards
- test_a5: 8 tar shards
- test_p1: 53 tar shards
- test_p2: 360 tar shards
Inside each tar, every sample is a pair of files sharing a unique key:

```
<key>.wav   # raw audio bytes
<key>.json  # {"id": ..., "rel_path": ..., "wav_format": ..., "duration": ..., "label_str": ..., "label": ...}
```
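To illustrate the pairing convention, here is a minimal sketch using only the standard library. It builds a tiny synthetic in-memory shard (the key `sample-000` and the label values are made up for illustration) and groups the tar members by their shared key:

```python
import io
import json
import tarfile

def pair_samples(tar_bytes: bytes) -> dict:
    """Group tar members into {key: {"wav": bytes, "json": dict}} pairs."""
    samples = {}
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar.getmembers():
            key, ext = member.name.rsplit(".", 1)
            data = tar.extractfile(member).read()
            entry = samples.setdefault(key, {})
            entry[ext] = json.loads(data) if ext == "json" else data
    return samples

# Build a synthetic shard with one <key>.wav / <key>.json pair
# (fake audio bytes and a hypothetical label, for illustration only).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [
        ("sample-000.wav", b"RIFF...fake audio bytes..."),
        ("sample-000.json",
         json.dumps({"id": "sample-000", "label_str": "park", "label": 3}).encode()),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

samples = pair_samples(buf.getvalue())
print(samples["sample-000"]["json"]["label_str"])  # park
```

The same key-grouping logic is what the WebDataset format relies on: members are adjacent in the archive and differ only in extension.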
`metadata.csv` columns:

```
key, shard, id, rel_path, wav_format, duration, label_str, label
```
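As a sketch, the per-split metadata can be read with the standard `csv` module. The two rows below are fabricated examples in the column format above, not actual dataset contents:

```python
import csv
import io

# Hypothetical rows following the metadata.csv schema (illustrative values).
csv_text = """key,shard,id,rel_path,wav_format,duration,label_str,label
sample-000,test_a1-000.tar,sample-000,audio/sample-000.wav,wav,4.2,park,3
sample-001,test_a1-000.tar,sample-001,audio/sample-001.wav,wav,3.7,metro,1
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
total_duration = sum(float(r["duration"]) for r in rows)
print(rows[0]["label_str"])          # park
print(round(total_duration, 1))      # 7.9
```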
## Loading

```python
from datasets import load_dataset

ds = load_dataset("Yougen/asc_testset")
print(ds)
print(ds["test_a1"][0])
# sample keys: 'wav' (decoded audio), 'json' (metadata), '__key__', '__url__'
```
For streaming (no full download needed):

```python
ds = load_dataset("Yougen/asc_testset", streaming=True)
for example in ds["test_a1"]:
    print(example["__key__"], example["json"]["label_str"])
    break
```
Hugging Face's `webdataset` builder automatically pairs `<key>.wav` with `<key>.json` inside every tar and decodes the audio.
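For instance, counting the label distribution over a streamed split could look like the sketch below. The three examples are fabricated stand-ins shaped like the streaming output; in practice you would iterate `ds["test_a1"]` from `load_dataset(..., streaming=True)`:

```python
from collections import Counter

# Fabricated examples mimicking the streaming sample shape
# ('__key__' and 'json' with 'label_str'); values are illustrative only.
examples = [
    {"__key__": "sample-000", "json": {"label_str": "park", "label": 3}},
    {"__key__": "sample-001", "json": {"label_str": "metro", "label": 1}},
    {"__key__": "sample-002", "json": {"label_str": "park", "label": 3}},
]

label_counts = Counter(ex["json"]["label_str"] for ex in examples)
print(label_counts.most_common())  # [('park', 2), ('metro', 1)]
```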