# Sangeetkar Mood Dataset

## Overview
The Sangeetkar Mood Dataset is a unified collection of audio tracks designed for Music Emotion Recognition (MER) and generative AI tasks (like Music-Flamingo). It combines several major audio repositories into a single, standardized format: 16kHz sampling rate, mono/stereo audio, and consistent metadata.
## Dataset Distribution
The dataset consists of more than 5,000 tracks from the following sources:

- **Navrasa**: Indian music organized across genres such as Bollywood, Punjabi Pop, and Desi Hip-Hop.
- **GTZAN (10s & 15s)**: Standard genre-classification tracks split into shorter segments for efficient training.
- **Music-Flamingo**: High-quality tracks curated for multimodal AI training.
## How to Use

### 1. Installation

To load and process this dataset, you need the `datasets` and `librosa` (or `soundfile`) libraries.

```bash
pip install datasets librosa
```
### 2. Loading the Dataset

Since the audio is embedded in the Parquet files, you can load it in two ways.

#### A. Streaming Mode (Recommended for Large Datasets)

This is the most efficient way: it doesn't download the whole 10 GB+ dataset to your disk, but fetches tracks one by one as you iterate.
```python
from datasets import load_dataset

# Load the dataset in streaming mode
ds = load_dataset("beastLucifer/sangeetkar-mood-dataset", split="train", streaming=True)

# Fetch the first sample
sample = next(iter(ds))
print(f"Title: {sample['song_title']}")
print(f"Source: {sample['source']}")
```
#### B. Standard Mode (Full Download)

Use this if you have enough disk space and want to perform multiple passes over the data quickly.
```python
from datasets import load_dataset

ds = load_dataset("beastLucifer/sangeetkar-mood-dataset", split="train")
print(f"Total Tracks: {len(ds)}")
```
### 3. Playing Music from the Dataset

You can play the audio directly in a Jupyter or Colab notebook using the `IPython.display` module.
```python
import IPython.display as ipd

# Select a sample (non-streaming mode; use next(iter(ds)) when streaming)
sample = ds[0]

# The 'audio' column contains a dictionary with 'array' and 'sampling_rate'
audio_array = sample['audio']['array']
sampling_rate = sample['audio']['sampling_rate']

print(f"Now Playing: {sample['song_title']} ({sample['source']})")
ipd.Audio(audio_array, rate=sampling_rate)
```
### 4. Exporting to WAV for NVIDIA Music-Flamingo

If your training script requires a local folder of `.wav` files and a `.jsonl` manifest (as in the NVIDIA Music-Flamingo repository), use this snippet:
```python
import json
import soundfile as sf
from pathlib import Path

# Create output directory
output_dir = Path("nvidia_data/audio")
output_dir.mkdir(parents=True, exist_ok=True)

manifest = []

# Process the first 100 samples as an example
for i, item in enumerate(ds.take(100)):
    filename = f"track_{i:04d}.wav"
    filepath = output_dir / filename

    # Save audio
    sf.write(filepath, item['audio']['array'], item['audio']['sampling_rate'])

    # Append to manifest
    manifest.append({
        "audio": str(filepath),
        "text": f"A music track titled {item['song_title']}",
        "source": item['source']
    })

# Save manifest.jsonl
with open("nvidia_data/manifest.jsonl", "w") as f:
    for entry in manifest:
        f.write(json.dumps(entry) + "\n")
```
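Before training, it's worth sanity-checking the exported manifest. Below is a minimal sketch using only the standard library; the `validate_manifest` helper and the demo file name are illustrative, not part of any existing tool.

```python
import json
from pathlib import Path

def validate_manifest(path):
    """Parse a .jsonl manifest and check each entry has the expected keys."""
    required = {"audio", "text", "source"}
    entries = []
    with open(path) as f:
        for line_no, line in enumerate(f, start=1):
            entry = json.loads(line)
            missing = required - entry.keys()
            if missing:
                raise ValueError(f"line {line_no}: missing keys {sorted(missing)}")
            entries.append(entry)
    return entries

# Demo on a throwaway manifest file
demo = Path("demo_manifest.jsonl")
demo.write_text(json.dumps(
    {"audio": "audio/track_0000.wav", "text": "A music track", "source": "navrasa"}
) + "\n")
print(len(validate_manifest(demo)))  # 1
```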
## Dataset Schema

| Column | Type | Description |
|---|---|---|
| `audio` | Audio | Contains `array` (waveform) and `sampling_rate` (16000 Hz). |
| `song_title` | String | Formatted as `"Genre/Artist - Title"`. |
| `source` | String | The origin of the track (e.g., `navrasa`, `gtzan-10s`). |
| `unique_id` | String | A randomized ID to prevent collisions during training. |
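Since `song_title` packs genre, artist, and title into one string, you may want to split it back into fields. Here is a minimal sketch assuming the `"Genre/Artist - Title"` convention above; the helper name and the sample title are illustrative.

```python
def parse_song_title(song_title: str) -> dict:
    """Split a "Genre/Artist - Title" string into its three parts."""
    genre_artist, _, title = song_title.partition(" - ")
    genre, _, artist = genre_artist.partition("/")
    return {"genre": genre, "artist": artist, "title": title}

print(parse_song_title("Bollywood/Arijit Singh - Tum Hi Ho"))
# {'genre': 'Bollywood', 'artist': 'Arijit Singh', 'title': 'Tum Hi Ho'}
```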