---
license: cc-by-nc-sa-4.0
tags:
  - music
  - vocal-transcription
  - melody
  - pitch-detection
pretty_name: Vocal Melody Transcription Dataset v1
---

# Vocal Melody Transcription Dataset v1

Training data for a monophonic vocal melody transcription model. The model uses a ROSVOT-style architecture (MERT encoder → U-Net with a Conformer bottleneck → onset/pitch/frame heads) and expects Demucs v4-separated vocal audio at inference time.

## Datasets

| Source | Tracks | Description |
|---|---|---|
| MIR-ST500 | 385 | Pop songs with manual onset/offset/pitch annotations |
| DALI | ~4,927 | Large-scale vocal annotations aligned to audio (10 batch tars) |
| MedleyDB | 107 | Multitrack recordings with Melody2 F0→note converted annotations |
| **Total** | **~5,420** | Matched audio+label pairs across all sources |

All audio is resampled to 24 kHz mono WAV and peak-normalized to -1 dB. Labels use a unified CSV format: `onset_sec, offset_sec, pitch_midi, pitch_hz, source_dataset`.
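The -1 dB peak normalization amounts to a single gain scale on the waveform. A minimal sketch (function name is illustrative, not from the actual pipeline):

```python
import numpy as np

def peak_normalize(audio: np.ndarray, peak_db: float = -1.0) -> np.ndarray:
    """Scale a waveform so its absolute peak sits at `peak_db` dBFS."""
    peak = np.max(np.abs(audio))
    if peak == 0:
        return audio  # silent clip: nothing to scale
    target = 10 ** (peak_db / 20)  # -1 dB -> ~0.891 linear amplitude
    return audio * (target / peak)
```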

## Files on this repo

| File | Size | Contents |
|---|---|---|
| `vocal_v1.tar` | ~1.3GB | MIR-ST500 processed audio + labels |
| `vocal_v1_dali_batch{1-10}.tar` | ~4GB each | DALI processed audio + labels (10 batches) |
| `vocal_v1_medleydb.tar` | ~915MB | MedleyDB processed audio (24kHz) + note-level labels |
| `MedleyDB_v1.tar` | ~8.8GB | Raw MedleyDB V1: 122 MIX wavs + 116 vocal stems (not used directly in the training pipeline) |
| `oneshots.tar` | ~972MB | 1,204 curated vocal oneshots for vocal-bleed augmentation |
| `vocal_v1_augmented.tar` | — | Obsolete (on-the-fly augmentation used instead) |

## Augmentation (on-the-fly)

Applied during training only (never to val/test):

- Pitch shift: ±4 semitones (label-aware; pitch annotations are shifted to match)
- Time stretch: 0.85x–1.15x
- Noise injection: SNR 10–40 dB
- Vocal bleed: overlay random oneshots at SNR 25–40 dB
- Downsample–resample: through 16 kHz / 22.05 kHz
- Random EQ: 3-band, ±3 dB gain
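The label-aware augmentations imply that note annotations are transformed alongside the audio. A sketch of how the labels might be adjusted, assuming a shift of `n_semitones` and a time-stretch `rate` where rate > 1 shortens the audio (helper names are hypothetical, not from the pipeline):

```python
SEMITONE = 2 ** (1 / 12)  # frequency ratio of one equal-tempered semitone

def shift_labels(labels, n_semitones):
    """Adjust (onset_sec, offset_sec, pitch_midi, pitch_hz) tuples
    to match a pitch-shifted waveform."""
    return [
        (onset, offset, midi + n_semitones, hz * SEMITONE ** n_semitones)
        for onset, offset, midi, hz in labels
    ]

def stretch_labels(labels, rate):
    """Adjust note times to match a time-stretched waveform
    (rate > 1 means faster playback, so times shrink by 1/rate)."""
    return [
        (onset / rate, offset / rate, midi, hz)
        for onset, offset, midi, hz in labels
    ]
```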

## Training Pipeline

`run.sh` (SLURM) auto-downloads all tars (skipping the obsolete augmented tar and the raw `MedleyDB_v1.tar`), extracts them to `data/`, rebuilds train/val/test splits stratified by source (seed 42), then trains with on-the-fly augmentation.
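A source-stratified split with a fixed seed could be rebuilt deterministically along these lines (an illustrative sketch; the actual `run.sh` logic may differ):

```python
import random
from collections import defaultdict

def stratified_split(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split (track_id, source_dataset) pairs into train/val/test,
    keeping each source's proportions roughly constant across splits."""
    by_source = defaultdict(list)
    for track_id, source in items:
        by_source[source].append(track_id)

    rng = random.Random(seed)  # fixed seed -> reproducible splits
    train, val, test = [], [], []
    for source in sorted(by_source):  # sorted for determinism
        tracks = by_source[source]
        rng.shuffle(tracks)
        n_train = int(len(tracks) * ratios[0])
        n_val = int(len(tracks) * ratios[1])
        train.extend(tracks[:n_train])
        val.extend(tracks[n_train:n_train + n_val])
        test.extend(tracks[n_train + n_val:])
    return train, val, test
```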

## Label Format

```
onset_sec, offset_sec, pitch_midi, pitch_hz, source_dataset
0.52, 0.89, 60.0, 261.63, mir_st500
```

Pitch targets use a bin offset: bin 0 = unvoiced, bin 1 = MIDI 21 (A0), so `pitch_bin = midi_note - 21 + 1`.
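In code, the bin mapping and its inverse (function names are illustrative):

```python
def midi_to_bin(midi_note=None):
    """Map a MIDI note to the model's pitch-target bin.
    Bin 0 is reserved for unvoiced frames; bin 1 is MIDI 21 (A0)."""
    if midi_note is None:
        return 0  # unvoiced frame
    return int(midi_note) - 21 + 1

def bin_to_midi(pitch_bin):
    """Inverse mapping; returns None for the unvoiced bin."""
    if pitch_bin == 0:
        return None
    return pitch_bin + 21 - 1
```

For example, middle C (MIDI 60, as in the sample label row) maps to bin 40.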