---
language:
- en
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: audio_filename
    dtype: string
  - name: metadata
    dtype: string
  splits:
  - name: vggsound_train
    num_bytes: 1189877449
    num_examples: 182615
  - name: vggsound_test
    num_bytes: 99939976
    num_examples: 15341
  - name: nonspeech7k_train
    num_bytes: 2451589
    num_examples: 6289
  - name: nonspeech7k_test
    num_bytes: 285542
    num_examples: 725
  - name: vocalsound_train
    num_bytes: 3646349
    num_examples: 15531
  - name: vocalsound_test
    num_bytes: 842929
    num_examples: 3591
  - name: urbansound8k
    num_bytes: 3453567
    num_examples: 8732
  - name: emotion
    num_bytes: 16432408
    num_examples: 19974
  - name: age
    num_bytes: 62968387
    num_examples: 48767
  - name: gender
    num_bytes: 60029824
    num_examples: 48767
  - name: language
    num_bytes: 93887480
    num_examples: 48767
  - name: fsd50k
    num_bytes: 175199243
    num_examples: 51197
  - name: CochlScene_train
    num_bytes: 17713725
    num_examples: 60855
  - name: CochlScene_test
    num_bytes: 2230052
    num_examples: 7687
  - name: CochlScene_val
    num_bytes: 2189339
    num_examples: 7573
  - name: tau2022
    num_bytes: 100770546
    num_examples: 230350
  - name: esd_emotion
    num_bytes: 4977344
    num_examples: 35000
  - name: BirdCLEF_2021_scientific_name
    num_bytes: 265135140
    num_examples: 27740
  - name: BirdCLEF_2021_common_name
    num_bytes: 250890697
    num_examples: 27740
  - name: audioset
    num_bytes: 2616069868
    num_examples: 312883
  - name: emobox
    num_bytes: 38409103
    num_examples: 184821
  download_size: 99507577
  dataset_size: 5007400557
configs:
- config_name: default
  data_files:
  - split: vggsound_train
    path: data/vggsound_train-*
  - split: vggsound_test
    path: data/vggsound_test-*
  - split: nonspeech7k_train
    path: data/nonspeech7k_train-*
  - split: nonspeech7k_test
    path: data/nonspeech7k_test-*
  - split: vocalsound_train
    path: data/vocalsound_train-*
  - split: vocalsound_test
    path: data/vocalsound_test-*
  - split: urbansound8k
    path: data/urbansound8k-*
  - split: emotion
    path: data/emotion-*
  - split: age
    path: data/age-*
  - split: gender
    path: data/gender-*
  - split: language
    path: data/language-*
  - split: fsd50k
    path: data/fsd50k-*
  - split: CochlScene_train
    path: data/CochlScene_train-*
  - split: CochlScene_test
    path: data/CochlScene_test-*
  - split: CochlScene_val
    path: data/CochlScene_val-*
  - split: tau2022
    path: data/tau2022-*
  - split: esd_emotion
    path: data/esd_emotion-*
  - split: BirdCLEF_2021_scientific_name
    path: data/BirdCLEF_2021_scientific_name-*
  - split: BirdCLEF_2021_common_name
    path: data/BirdCLEF_2021_common_name-*
  - split: audioset
    path: data/audioset-*
  - split: emobox
    path: data/emobox-*
---

# Zeroshot-Audio-Classification-Instructions

Converts audio classification datasets into zero-shot speech-instruction format, supporting both single-label and multi-label classification. The source datasets:

1. [VGGSound](https://huggingface.co/datasets/Loie/VGGSound)
2. [FSD50k](https://huggingface.co/datasets/Fhrozen/FSD50k)
3. [Nonspeech7k](https://huggingface.co/datasets/W4ng1204/Nonspeech7k)
4. [UrbanSound8K](https://huggingface.co/datasets/danavery/urbansound8K)
5. [VocalSound](https://huggingface.co/datasets/MahiA/VocalSound)
6. [Emotion](https://huggingface.co/datasets/mesolitica/Classification-Speech-Instructions/viewer/default/emotion)
7. [Gender](https://huggingface.co/datasets/mesolitica/Classification-Speech-Instructions/viewer/default/gender_age)
8. [ESD Emotion](https://github.com/HLTSingapore/Emotional-Speech-Data)
9. [Age](https://huggingface.co/datasets/mesolitica/Classification-Speech-Instructions/viewer/default/gender_age)
10. [Language](https://huggingface.co/datasets/mesolitica/Classification-Speech-Instructions/viewer/default/gender_age)
11. [TAU Urban Acoustic Scenes 2022](https://zenodo.org/records/6337421)
12. [CochlScene](https://zenodo.org/records/7080122)
13. [BirdCLEF_2021](https://huggingface.co/datasets/mesolitica/Animal-Sound-Instructions/viewer/default/BirdCLEF_2021)
14. [EmoBox](https://huggingface.co/datasets/mesolitica/EmoBox)
15. [AudioSet](https://huggingface.co/datasets/mesolitica/AudioSet-Audio-Instructions)

We also converted the large WAV files into 16 kHz MP3 to reduce storage size.

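As a rough illustration, an equivalent conversion can be done with ffmpeg. The exact encoder settings used upstream are not documented here, so the parameters below (default MP3 bitrate, only the sample rate pinned) are assumptions:

```python
def wav_to_mp3_cmd(src, dst, sample_rate=16000):
    """Build an ffmpeg command that re-encodes an audio file as MP3
    at the given sample rate (bitrate is left to ffmpeg's default)."""
    return ["ffmpeg", "-y", "-i", src, "-ar", str(sample_rate), dst]

# Example (requires ffmpeg on PATH):
# import subprocess
# subprocess.run(wav_to_mp3_cmd("clip.wav", "clip.mp3"), check=True)
```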
**To prevent leakage, please do not include the test splits in training.**

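One simple way to honour this is to select training material by split name. The split names below are taken from this card's metadata; treating `CochlScene_val` as held-out alongside the test splits is our assumption:

```python
# Splits to keep out of training. Whether CochlScene_val should also be
# excluded is an assumption here; the *_test splits clearly should be.
HELD_OUT = {
    "vggsound_test", "nonspeech7k_test", "vocalsound_test",
    "CochlScene_test", "CochlScene_val",
}

def training_splits(split_names):
    """Filter a list of split names down to the ones safe to train on."""
    return [name for name in split_names if name not in HELD_OUT]

print(training_splits(["vggsound_train", "vggsound_test"]))
# → ['vggsound_train']
```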
## How to prepare the dataset

```bash
huggingface-cli download \
  mesolitica/Zeroshot-Audio-Classification-Instructions \
  --include "*.zip" \
  --repo-type "dataset" \
  --local-dir './'

huggingface-cli download \
  mesolitica/Audio-Adversarial-Instructions \
  --include "*.zip" \
  --repo-type "dataset" \
  --local-dir './'

huggingface-cli download \
  mesolitica/Animal-Sound-Instructions \
  --include "*.zip" \
  --repo-type "dataset" \
  --local-dir './'

huggingface-cli download \
  mesolitica/EmoBox \
  --include "*.zip" \
  --repo-type "dataset" \
  --local-dir './'

wget https://gist.githubusercontent.com/huseinzol05/2e26de4f3b29d99e993b349864ab6c10/raw/9b2251f3ff958770215d70c8d82d311f82791b78/unzip.py
python3 unzip.py
```
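
After downloading and unzipping, the question/answer rows themselves can be read with the `datasets` library, and each row's `audio_filename` resolved against the unzip directory. The `root="./"` layout is an assumption based on the `--local-dir './'` used above:

```python
import os

def resolve_audio(example, root="./"):
    """Join an example's audio_filename onto the local unzip directory."""
    return os.path.join(root, example["audio_filename"])

# Usage (requires `pip install datasets`):
# from datasets import load_dataset
# ds = load_dataset("mesolitica/Zeroshot-Audio-Classification-Instructions",
#                   split="vocalsound_test")
# print(ds[0]["question"], ds[0]["answer"], resolve_audio(ds[0]))
```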

## Acknowledgement

Special thanks to https://www.sns.com.my and NVIDIA for the 8x H100 node!