---
license: cc-by-nc-sa-4.0
task_categories:
- audio-classification
language:
- en
- ko
- pt
tags:
- non-speech
- vocal-sounds
- emotion
- human-voice
- webdataset
size_categories:
- 10K<n<100K
configs:
- config_name: NonSpeech7k
  data_files:
  - split: train
    path: NonSpeech7k/train/*.tar
  - split: test
    path: NonSpeech7k/test/*.tar
- config_name: VocalSound
  data_files:
  - split: train
    path: VocalSound/*.tar
- config_name: DeeplyNonverbalVocalization
  data_files:
  - split: train
    path: DeeplyNonverbalVocalization/*.tar
- config_name: EmoGator
  data_files:
  - split: train
    path: EmoGator/*.tar
- config_name: Expresso
  data_files:
  - split: train
    path: Expresso/*.tar
- config_name: VIVAE
  data_files:
  - split: train
    path: VIVAE/*.tar
---

# Speech Utterances Dataset Collection

A collection of human non-speech vocal sound datasets in WebDataset format, useful for audio classification tasks involving vocal expressions, emotions, and non-verbal sounds.

## Subsets

### NonSpeech7k
- **Samples**: 7,014 (train: 6,289, test: 725)
- **Classes**: Breathing, Coughing, Crying, Laughing, Screaming, Sneezing, Yawning
- **Source**: [Zenodo](https://zenodo.org/records/6967442)
- **License**: CC BY-NC-SA 4.0

### VocalSound
- **Samples**: 21,024
- **Classes**: Laughter, Sigh, Cough, Throat clearing, Sneeze, Sniff
- **Speakers**: 3,365 from 60 countries
- **Source**: [GitHub](https://github.com/YuanGongND/vocalsound)
- **License**: CC BY-SA 4.0

### DeeplyNonverbalVocalization
- **Samples**: 726 (5% subset of the full dataset)
- **Classes**: 16 (teeth-chattering, teeth-grinding, tongue-clicking, nose-blowing, coughing, yawning, throat-clearing, sighing, lip-popping, lip-smacking, panting, crying, laughing, sneezing, moaning, screaming)
- **Source**: [OpenSLR](https://www.openslr.org/99/)
- **License**: CC BY-NC-ND 4.0

### EmoGator
- **Samples**: 32,130
- **Classes**: 30 emotion categories (Adoration, Amusement, Anger, Awe, Confusion, Contempt, Contentment, Desire, Disappointment, Disgust, Distress, Ecstasy, Elation, Embarrassment, Fear, Guilt, Interest, Neutral, Pain, Pride, Realization, Relief, Romantic Love, Sadness, Serenity, Shame, Surprise Negative, Surprise Positive, Sympathy, Triumph)
- **Contributors**: 357
- **Source**: [GitHub](https://github.com/fredbuhl/EmoGator)
- **License**: Apache 2.0

### Expresso
- **Samples**: 12,293
- **Styles**: 26 expressive styles (angry, animal, awe, bored, calm, child, confused, default, desire, disgusted, enunciated, fast, fearful, happy, laughing, narration, non_verbal, projected, sad, sarcastic, singing, sleepy, sympathetic, whisper, etc.)
- **Speakers**: 4 (2 male, 2 female)
- **Source**: [Expresso](https://speechbot.github.io/expresso)
- **License**: CC BY-NC 4.0

### VIVAE
- **Samples**: 1,565
- **Emotions**: achievement, anger, fear, pain, pleasure, surprise
- **Intensity levels**: low, moderate, strong, peak
- **Speakers**: 10
- **Source**: [Zenodo](https://zenodo.org/record/4066235)
- **License**: CC BY 4.0
## Format

All datasets are stored in WebDataset format:
- **Audio**: FLAC, 48 kHz, 16-bit, mono
- **Metadata**: JSON with a "text" key containing the label
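
In WebDataset terms, each sample is a pair of same-stem files inside a `.tar` shard (for example, a hypothetical `0001.flac` next to `0001.json`). A minimal sketch of that pairing, using placeholder bytes in place of real FLAC audio:

```python
import io
import json
import tarfile

# Build a tiny WebDataset-style shard in memory: one sample = two
# same-stem members, an audio file and a JSON sidecar with the label.
# "0001" and the label "Coughing" are illustrative, not from the data.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [
        ("0001.flac", b"fake-flac-bytes"),  # placeholder, not real audio
        ("0001.json", json.dumps({"text": "Coughing"}).encode()),
    ]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Read the shard back, grouping members by their shared stem.
buf.seek(0)
samples = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        stem, ext = member.name.rsplit(".", 1)
        samples.setdefault(stem, {})[ext] = tar.extractfile(member).read()

label = json.loads(samples["0001"]["json"])["text"]
print(label)  # Coughing
```

Loaders such as `datasets` or the `webdataset` library perform this stem-based grouping automatically; the sketch only shows what the on-disk layout looks like.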

## Usage

```python
from datasets import load_dataset

# Load a specific subset
ds = load_dataset("gijs/speech-utterances", "NonSpeech7k")
ds = load_dataset("gijs/speech-utterances", "VocalSound")
ds = load_dataset("gijs/speech-utterances", "EmoGator")
```

## Citations

Please cite the original datasets when using this collection:

- **NonSpeech7k**: Rashid et al. (2023). Nonspeech7k dataset. IET Signal Processing.
- **VocalSound**: Gong et al. (2022). VocalSound. ICASSP 2022.
- **DeeplyNonverbalVocalization**: Deeply Inc. Vocal Characterizer Dataset.
- **EmoGator**: Buhl et al. (2023). arXiv:2301.00508.
- **Expresso**: Nguyen et al. (2023). EXPRESSO. INTERSPEECH 2023.
- **VIVAE**: Holz et al. (2020). VIVAE corpus. Zenodo.