| --- |
| license: other |
| task_categories: |
| - audio-classification |
| pretty_name: PFANN Audio Dataset |
| configs: |
| - config_name: default |
| data_files: |
| - split: test_20211202_query |
| path: "data/test_20211202_query/audio/*.tar" |
| - split: test_20211202_seed |
| path: "data/test_20211202_seed/audio/*.tar" |
| - split: test_20211224_query |
| path: "data/test_20211224_query/audio/*.tar" |
| - split: test_20211224_seed |
| path: "data/test_20211224_seed/audio/*.tar" |
| - split: test_20220316_query |
| path: "data/test_20220316_query/audio/*.tar" |
| - split: test_20220316_seed |
| path: "data/test_20220316_seed/audio/*.tar" |
| tags: |
| - audio |
| - music |
| - audio-fingerprinting |
| - pfann |
| - webdataset |
| --- |
| |
| # Yougen/pfann_testset |
| |
A PFANN-style audio dataset, packed as **WebDataset tar shards**.

Inside each tar, every sample is a pair of files sharing a unique key:
| ``` |
| <key>.<ext> # raw audio bytes (ext == wav / mp3 / flac / ... as in source) |
| <key>.json # {"audio_id":..., "subset":..., "rel_path":..., "duration":..., "sample_rate":..., "channels":...} |
| ``` |
| |
| `metadata.csv` columns: |
| `key, shard, audio_id, subset, rel_path, duration, sample_rate, channels` |
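For bookkeeping outside the loader, `metadata.csv` can be parsed with the standard `csv` module. The header below mirrors the columns listed above; the single row of values (shard name, paths, duration) is invented for illustration:

```python
import csv
import io

# Illustrative CSV text with the documented header; real rows come from
# the dataset's metadata.csv (the values here are made up).
text = (
    "key,shard,audio_id,subset,rel_path,duration,sample_rate,channels\n"
    "000001,shard-000000.tar,q0001,test_20211202_query,"
    "query/q0001.wav,29.9,44100,2\n"
)

rows = list(csv.DictReader(io.StringIO(text)))
row = rows[0]
print(row["key"], row["subset"], float(row["duration"]), int(row["channels"]))
```

`DictReader` keys each value by the column name, so the numeric fields (`duration`, `sample_rate`, `channels`) still need an explicit cast.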
| |
| ## Loading |
| |
| ```python |
| from datasets import load_dataset |
| |
| ds = load_dataset("Yougen/pfann_testset") |
| print(ds) |
| print(ds["test_20211202_query"][0]) |
| # sample keys: 'wav'/'mp3'/... (decoded audio), 'json' (metadata), '__key__', '__url__' |
| ``` |
| |
| For streaming (no full download needed): |
| |
| ```python |
| ds = load_dataset("Yougen/pfann_testset", streaming=True) |
| for example in ds["test_20211202_query"]: |
| print(example["__key__"], example["json"]["audio_id"]) |
| break |
| ``` |
| |
Hugging Face's `webdataset` builder automatically pairs `<key>.<audio_ext>`
with `<key>.json` inside each tar and decodes the audio.
| |