---
license: apache-2.0
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: filename
    dtype: string
  - name: x
    dtype:
      audio:
        sampling_rate: 16000
  - name: instrument
    dtype:
      class_label:
        names:
          '0': flute
          '1': saxophone
          '2': trumpet
          '3': violin
  - name: tonic
    dtype:
      class_label:
        names:
          '0': A
          '1': A#
          '2': B
          '3': C
          '4': C#
          '5': D
          '6': D#
          '7': E
          '8': F
          '9': F#
          '10': G
          '11': G#
  - name: octave
    dtype:
      class_label:
        names:
          '0': '4'
  - name: scale
    dtype:
      class_label:
        names:
          '0': blues
          '1': major
          '2': minor
  - name: rhythm_bar1
    dtype:
      class_label:
        names:
          '0': '0'
          '1': '1'
          '2': '2'
          '3': '3'
          '4': '4'
          '5': '5'
          '6': '6'
          '7': '7'
          '8': '8'
          '9': '9'
          '10': '10'
          '11': '11'
          '12': '12'
          '13': '13'
          '14': '14'
          '15': '15'
          '16': '16'
          '17': '17'
          '18': '18'
          '19': '19'
          '20': '20'
          '21': '21'
          '22': '22'
          '23': '23'
          '24': '24'
          '25': '25'
          '26': '26'
          '27': '27'
  - name: arp_chord1
    dtype:
      class_label:
        names:
          '0': down
          '1': up
  - name: arp_chord2
    dtype:
      class_label:
        names:
          '0': down
          '1': up
  splits:
  - name: test
    num_bytes: 227141023
    num_examples: 2420
  - name: train
    num_bytes: 1060220348.535
    num_examples: 11289
  - name: val
    num_bytes: 226830467.066
    num_examples: 2419
  download_size: 1422650313
  dataset_size: 1514191838.6009998
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
---
# MSD dMelodies-WAV Dataset Attribution
The Multi-factor Sequential Disentanglement benchmark includes the dMelodies-WAV dataset, a synthetically generated collection of 48,000-sample audio waveforms labeled with one static factor (instrument) and five dynamic musical attributes. dMelodies-WAV extends the symbolic dMelodies benchmark into the raw audio domain: a subset of dMelodies was synthesized with the MIDI-DDSP neural audio synthesis model, producing realistic audio across four instruments. The dataset provides both global factors (instrument, tonic, mode) and local factors (rhythm, arpeggiation), making it a strong benchmark for studying global-local and hierarchical factor models of disentanglement in raw waveform music.
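The categorical features in the metadata header above are stored as integer class labels. The sketch below (plain Python, not part of any official loader; the example row is hypothetical) mirrors those vocabularies and shows how an integer-encoded row decodes to human-readable factor values:

```python
# Class-label vocabularies mirroring the dataset_info header above.
INSTRUMENT = ["flute", "saxophone", "trumpet", "violin"]
TONIC = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
OCTAVE = ["4"]                             # single octave in this release
SCALE = ["blues", "major", "minor"]
RHYTHM_BAR1 = [str(i) for i in range(28)]  # 28 rhythm patterns, labeled "0".."27"
ARP = ["down", "up"]                       # arpeggiation direction per chord

def decode(row: dict) -> dict:
    """Map a row of integer class labels to readable factor names."""
    return {
        "instrument": INSTRUMENT[row["instrument"]],
        "tonic": TONIC[row["tonic"]],
        "octave": OCTAVE[row["octave"]],
        "scale": SCALE[row["scale"]],
        "rhythm_bar1": RHYTHM_BAR1[row["rhythm_bar1"]],
        "arp_chord1": ARP[row["arp_chord1"]],
        "arp_chord2": ARP[row["arp_chord2"]],
    }

# Hypothetical example row (integer labels as stored in the dataset):
example = {"instrument": 2, "tonic": 3, "octave": 0, "scale": 1,
           "rhythm_bar1": 5, "arp_chord1": 0, "arp_chord2": 1}
print(decode(example))
# → {'instrument': 'trumpet', 'tonic': 'C', 'octave': '4', 'scale': 'major',
#    'rhythm_bar1': '5', 'arp_chord1': 'down', 'arp_chord2': 'up'}
```

The `x` feature itself is an `audio` column (16 kHz sampling rate), so standard dataset tooling will decode it to a waveform array alongside these labels.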
Original repositories:
Reference papers:

A. Pati, S. Gururani, A. Lerch. dMelodies: A Music Dataset for Disentanglement Learning. ISMIR 2020. https://arxiv.org/abs/2007.15067

```bibtex
@inproceedings{pati2020dmelodies,
  title={dMelodies: A Music Dataset for Disentanglement Learning},
  author={Pati, Ashis and Gururani, Siddharth and Lerch, Alexander},
  booktitle={21st International Society for Music Information Retrieval Conference (ISMIR)},
  year={2020},
  address={Montréal, Canada}
}
```

Y. Wu, E. Manilow, Y. Deng, R. Swavely, K. Kastner, T. Cooijmans, A. Courville, C.-Z. A. Huang, J. Engel. MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling. ICLR 2022. https://arxiv.org/abs/2112.09312

```bibtex
@inproceedings{wu2022mididdsp,
  title={{MIDI}-{DDSP}: Detailed Control of Musical Performance via Hierarchical Modeling},
  author={Yusong Wu and Ethan Manilow and Yi Deng and Rigel Swavely and Kyle Kastner and Tim Cooijmans and Aaron Courville and Cheng-Zhi Anna Huang and Jesse Engel},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=UseMOjWENv}
}
```
⚠ Note: The dMelodies-WAV dataset is provided for non-commercial research purposes. Please cite the above when using this dataset in your work.