---
tags:
- audio
- speech
- vad
- humming
license: cc-by-sa-4.0
task_categories:
- voice-activity-detection
language:
- en
size_categories:
- 1K<n<10K
---
# [WIP] HumSpeechBlend Dataset: Humming vs Speech Detection
> ⚠️ **Work in progress** — the dataset and this card are still being finalized.
## πŸ“Œ Overview
**HumSpeechBlend** is a dataset designed to fine-tune **Voice Activity Detection (VAD) models** to distinguish between **humming** and actual speech. Current VAD models often misclassify humming as speech, leading to incorrect segmentation in speech processing tasks. This dataset provides a structured collection of humming audio interspersed with speech to help improve model accuracy.
<!--
## 🎯 Purpose
The dataset was created to address the challenge where **humming is mistakenly detected as speech** by existing VAD models. By fine-tuning a VAD model with this dataset, we aim to:
- Improve **humming detection** accuracy.
- Ensure **clear differentiation between humming and speech**.
- Enhance **real-world speech activity detection**. -->
## πŸ“œ Dataset Creation Strategy
To build this dataset, the following methodology was used:
1. **Humming Audio Collection**: Various humming recordings were sourced from the ["MLEnd-Hums-and-Whistles"](https://www.kaggle.com/datasets/jesusrequena/mlend-hums-and-whistles?resource=download) dataset.
2. **Speech Insertion**: Short speech segments were extracted from ["Global Recordings Network"](https://models.silero.ai/vad_datasets/globalrecordings.feather) datasets.
3. **Mixing Strategy**:
- Speech can be **longer or shorter than the humming segment**.
- Speech is **randomly inserted** at different timestamps in the humming audio.
- Speech timestamps were carefully annotated to facilitate supervised learning.
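The mixing strategy above can be sketched as follows. This is an illustrative reimplementation, not the actual generation script: the function name, array-based mixing, and timestamp format are assumptions for the example.

```python
import random
import numpy as np

def mix_speech_into_humming(humming: np.ndarray, speech: np.ndarray, sr: int):
    """Insert a speech segment at a random offset inside a humming clip
    and return the mixed audio plus the speech timestamps in seconds."""
    if len(speech) >= len(humming):
        # Speech is longer than (or equal to) the humming: the humming
        # plays under the start of the speech segment.
        out = speech.copy()
        out[: len(humming)] += humming
        start = 0
    else:
        # Pick a random insertion point inside the humming clip.
        start = random.randint(0, len(humming) - len(speech))
        out = humming.copy()
        out[start : start + len(speech)] += speech
    ts = {"start": start / sr, "end": (start + len(speech)) / sr}
    return out, ts
```

Recording `ts` alongside each mixed file is what makes the supervised annotation (`speech_ts` below) possible.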
### πŸ”Ή Metadata Explanation
The dataset includes the following metadata columns:
| Column Name | Description |
|----------------------------------|-------------|
| `file_name` | The path to the final mixed audio file (humming + speech). |
| `speech_ts` | The timestamps where speech appears within the mixed audio file. |
| `humming_song` | The song or source from which the humming was derived. |
| `humming_Interpreter` | The individual or source providing the humming. More info in [`MLEndHWD_Interpreter_Demographics.csv`](https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend/blob/main/MLEndHWD_Interpreter_Demographics.csv) |
| `humming_audio_used`             | The path to the humming audio file in the original dataset. |
| `humming_transcript`             | Transcription of the humming, produced with Whisper (`whisper-large-v3-turbo`). |
| `globalrecordings_audio_used`    | The path to the speech segment sourced from Global Recordings Network. |
| `globalrecordings_audio_ts_used` | The start and end timestamps of the speech segment in the original recording. |
## πŸ“₯ Download and Usage
### πŸ› οΈ Loading the Dataset
Since the dataset does not have predefined splits, you can load it using the following code:
```python
import pandas as pd
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Load the dataset from Hugging Face (no predefined splits)
dataset = load_dataset("CuriousMonkey7/HumSpeechBlend")

# Download and load the Feather metadata from the dataset repo
metadata_path = hf_hub_download(
    repo_id="CuriousMonkey7/HumSpeechBlend",
    filename="metadata.feather",
    repo_type="dataset",
)
metadata = pd.read_feather(metadata_path)
print(metadata.head())
```
### πŸ”Š Loading Audio Files
To work with the audio files:
```python
import torchaudio

# Replace the path with any downloaded audio file from the dataset
waveform, sample_rate = torchaudio.load("data/audio1.wav")
print(f"Sample Rate: {sample_rate}, Waveform Shape: {waveform.shape}")
```
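For fine-tuning a VAD model, the `speech_ts` annotations can be converted into frame-level labels (1 = speech, 0 = humming/non-speech). A minimal sketch is shown below; the exact serialized format of `speech_ts` is an assumption here (a list of `{"start", "end"}` dicts in seconds), so adapt the parsing to the actual metadata.

```python
import numpy as np

def frame_labels(speech_ts, duration_s, frame_ms=20):
    """Convert speech segments (seconds) into per-frame binary labels.

    speech_ts  -- assumed list of {"start": float, "end": float} dicts
    duration_s -- total clip duration in seconds
    frame_ms   -- frame hop in milliseconds (20 ms is a common VAD choice)
    """
    n_frames = int(duration_s * 1000 / frame_ms)
    labels = np.zeros(n_frames, dtype=np.int8)
    for seg in speech_ts:
        lo = int(seg["start"] * 1000 / frame_ms)
        hi = int(np.ceil(seg["end"] * 1000 / frame_ms))
        labels[lo:hi] = 1  # mark frames overlapping the speech segment
    return labels
```

For example, a 1-second clip with speech from 0.1 s to 0.3 s yields 50 frames at 20 ms, with frames 5–14 labeled as speech.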
## πŸ“„ Citation
If you use this dataset, please cite it accordingly.
```bibtex
@dataset{HumSpeechBlend,
author = {Sourabh Saini},
title = {HumSpeechBlend: Humming vs Speech Dataset},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend}
}
```