  data_files:
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- music
size_categories:
- n<1K
---

# HumMusQA: A Human-written Music Understanding QA Benchmark Dataset

**Authors:** Benno Weck, Pablo Puentes, Andrea Poltronieri, Satyajeet Prabhu, Dmitry Bogdanov

HumMusQA is a multiple-choice question answering dataset designed to test music understanding in Large Audio-Language Models (LALMs).

## Dataset Highlights

- ✍️ 320 hand-written multiple-choice questions curated and validated by experts with musical training
- 🎵 108 Creative Commons-licensed music tracks sourced from Jamendo
- ⏱️ Music recordings ranging from 30 to 90 seconds

## Loading the data

```python
from datasets import load_dataset

dataset = load_dataset("mtg-upf/HumMusQA", split="test")
```
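
Once loaded, scoring a model on the multiple-choice questions reduces to comparing predicted option labels against the gold answers. A minimal sketch of that comparison (the option labels and lists below are illustrative placeholders, not the dataset's actual schema):

```python
def accuracy(gold, predicted):
    """Fraction of questions where the predicted option matches the gold option."""
    if not gold:
        return 0.0
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Hypothetical gold answers and model predictions for three questions.
gold = ["B", "A", "D"]
predicted = ["B", "C", "D"]
score = accuracy(gold, predicted)  # two of three correct
```
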

## Licensing

- The **dataset annotations** are licensed under **Creative Commons Attribution 4.0 (CC BY 4.0)**.
- Each **audio track** follows its **own Creative Commons license**, as specified in the dataset metadata.

Users are responsible for complying with the license terms of each individual audio track.

## Paper

- Paper DOI: https://doi.org/10.18653/v1/2026.nlp4musa-1.9
- Zenodo DOI: https://doi.org/10.5281/zenodo.18462523
- arXiv DOI: https://doi.org/10.48550/arXiv.2603.27877

## Citation

If you use this dataset, please cite [our paper](https://arxiv.org/abs/2603.27877):

> Benno Weck, Pablo Puentes, Andrea Poltronieri, Satyajeet Prabhu, and Dmitry Bogdanov. 2026. HumMusQA: A Human-written Music Understanding QA Benchmark Dataset. In Proceedings of the 4th Workshop on NLP for Music and Audio (NLP4MusA 2026), pages 58–67, Rabat, Morocco. Association for Computational Linguistics.

### BibTeX
```bibtex
@inproceedings{weck-etal-2026-hummusqa,
    title = "{H}um{M}us{QA}: A Human-written Music Understanding {QA} Benchmark Dataset",
    author = "Weck, Benno and
      Puentes, Pablo and
      Poltronieri, Andrea and
      Prabhu, Satyajeet and
      Bogdanov, Dmitry",
    editor = "Epure, Elena V. and
      Oramas, Sergio and
      Doh, SeungHeon and
      Ramoneda, Pedro and
      Kruspe, Anna and
      Sordo, Mohamed",
    booktitle = "Proceedings of the 4th Workshop on {NLP} for Music and Audio ({NLP}4{M}us{A} 2026)",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.nlp4musa-1.9/",
    doi = "10.18653/v1/2026.nlp4musa-1.9",
    pages = "58--67",
    ISBN = "979-8-89176-369-2",
    abstract = "The evaluation of music understanding in Large Audio-Language Models (LALMs) requires a rigorously defined benchmark that truly tests whether models can perceive and interpret music, a standard that current data methodologies frequently fail to meet. This paper introduces a meticulously structured approach to music evaluation, proposing a new dataset of 320 hand-written questions curated and validated by experts with musical training, arguing that such focused, manual curation is superior for probing complex audio comprehension. To demonstrate the use of the dataset, we benchmark six state-of-the-art LALMs and additionally test their robustness to uni-modal shortcuts."
}
```