---
tags:
- audio
- speech
- vad
- humming
license: cc-by-sa-4.0
task_categories:
- voice-activity-detection
language:
- en
size_categories:
- 1K<n<10K
---
# [WIP] HumSpeechBlend Dataset: Humming vs Speech Detection

## πŸ“Œ Overview
**HumSpeechBlend** is a dataset designed to fine-tune **Voice Activity Detection (VAD) models** to distinguish between **humming** and actual speech. Current VAD models often misclassify humming as speech, leading to incorrect segmentation in speech processing tasks. This dataset provides a structured collection of humming audio interspersed with speech to help improve model accuracy.
<!-- 
## 🎯 Purpose
The dataset was created to address the challenge where **humming is mistakenly detected as speech** by existing VAD models. By fine-tuning a VAD model with this dataset, we aim to:
- Improve **humming detection** accuracy.
- Ensure **clear differentiation between humming and speech**.
- Enhance **real-world speech activity detection**. -->

## πŸ“œ Dataset Creation Strategy
To build this dataset, the following methodology was used:
1. **Humming Audio Collection**: Various humming recordings were sourced from the ["MLEnd-Hums-and-Whistles"](https://www.kaggle.com/datasets/jesusrequena/mlend-hums-and-whistles?resource=download) dataset.
2. **Speech Insertion**: Short speech segments were extracted from ["Global Recordings Network"](https://models.silero.ai/vad_datasets/globalrecordings.feather) datasets.
3. **Mixing Strategy**:
   - Speech can be **longer or shorter than the humming segment**.
   - Speech is **randomly inserted** at different timestamps in the humming audio.
   - Speech timestamps were carefully annotated to facilitate supervised learning.
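The insertion step above can be sketched roughly as follows. This is a minimal illustration with NumPy, not the actual generation script; the function name and the timestamp format are hypothetical:

```python
import numpy as np

def mix_speech_into_humming(humming, speech, sample_rate, rng=None):
    """Insert a speech segment at a random offset within a humming clip
    and return the mixed audio plus the speech timestamps in seconds.
    Illustrative only -- the real pipeline may overlay rather than replace."""
    rng = rng or np.random.default_rng()
    if len(speech) >= len(humming):
        # Speech longer than the humming: extend the clip to fit the speech.
        out = np.zeros(len(speech), dtype=humming.dtype)
        out[: len(humming)] = humming
        start = 0
    else:
        out = humming.copy()
        start = int(rng.integers(0, len(humming) - len(speech)))
    end = start + len(speech)
    out[start:end] = speech  # replace the humming samples with speech
    ts = {"start": start / sample_rate, "end": end / sample_rate}
    return out, ts
```

The returned timestamps correspond to the `speech_ts` annotations described below.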


### πŸ”Ή Metadata Explanation
The dataset includes the following metadata columns:

| Column Name                      | Description |
|----------------------------------|-------------|
| `file_name`                     | The path to the final mixed audio file (humming + speech). |
| `speech_ts`                     | The timestamps where speech appears within the mixed audio file. |
| `humming_song`                  | The song or source from which the humming was derived. |
| `humming_Interpreter`           | The individual or source providing the humming. More info in [`MLEndHWD_Interpreter_Demographics.csv`](https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend/blob/main/MLEndHWD_Interpreter_Demographics.csv) |
| `humming_audio_used`            | Path to the humming audio file in the original dataset. |
| `humming_transcript`            | Transcription of the humming produced by Whisper large-v3-turbo. |
| `globalrecordings_audio_used`   | Speech segment sourced from Global Recordings Network. |
| `globalrecordings_audio_ts_used` | The start and end timestamps of the speech segment in the original recording. |



## πŸ“₯ Download and Usage

### πŸ› οΈ Loading the Dataset
Since the dataset does not have predefined splits, you can load it using the following code:

```python
import pandas as pd
from datasets import load_dataset

# Load the dataset from Hugging Face. With no split argument, load_dataset
# returns a DatasetDict (the Hub assigns a default "train" split).
dataset = load_dataset("CuriousMonkey7/HumSpeechBlend")

# Load the metadata (assumes metadata.feather has been downloaded locally)
metadata = pd.read_feather("metadata.feather")
print(metadata.head())
```

### πŸ”Š Loading Audio Files
To work with the audio files:
```python
import torchaudio

# Load one of the mixed audio files (path relative to the dataset root)
waveform, sample_rate = torchaudio.load("data/audio1.wav")
print(f"Sample Rate: {sample_rate}, Waveform Shape: {waveform.shape}")
```
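Once an audio file is loaded, the `speech_ts` annotations can be used to separate the speech spans from the surrounding humming. A minimal sketch, assuming `speech_ts` is a list of `{"start", "end"}` dicts in seconds (the stored format may differ):

```python
import torch

def extract_speech_segments(waveform, sample_rate, speech_ts):
    """Slice out the annotated speech spans from a (channels, samples)
    waveform; everything outside these spans is humming/non-speech."""
    segments = []
    for ts in speech_ts:
        start = int(ts["start"] * sample_rate)
        end = int(ts["end"] * sample_rate)
        segments.append(waveform[:, start:end])
    return segments
```

These segments can serve as positive (speech) examples when fine-tuning a VAD model, with the remaining humming regions as negatives.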

## πŸ“„ Citation
If you use this dataset, please cite it as follows:

```bibtex
@dataset{HumSpeechBlend,
  author = {Sourabh Saini},
  title = {HumSpeechBlend: Humming vs Speech Dataset},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend}
}
```