---
license: cc-by-nc-4.0
---
# MMAU-Pro: A Challenging and Comprehensive Benchmark for Audio General Intelligence

[![Paper](https://img.shields.io/badge/arxiv-%20PDF-red)](https://www.arxiv.org/pdf/2508.13992) [![Audios](https://img.shields.io/badge/🔈%20-Audios-blue)](https://huggingface.co/datasets/gamma-lab-umd/MMAU-Pro/blob/main/data.zip)


[MMAU-Pro](https://arxiv.org/abs/2508.13992) is the most comprehensive benchmark to date for evaluating **audio intelligence in multimodal models**. It spans speech, environmental sounds, music, and their combinations—covering **49 distinct perceptual and reasoning skills**.  

The dataset contains **5,305 expert-annotated question–answer pairs**, with audios sourced directly *from the wild*. It introduces several novel challenges overlooked by prior benchmarks, including:  

- Long-form audio understanding (up to 10 minutes)  
- Multi-audio reasoning  
- Spatial audio perception  
- Multicultural music reasoning  
- Voice-based STEM and world-knowledge QA  
- Instruction-following with verifiable constraints  
- Open-ended QA in addition to MCQs  

---

## 🚀 Usage

You can load the dataset via the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("gamma-lab-umd/MMAU-Pro")
```

For evaluation, we provide:
- MCQ scoring via embedding similarity (NV-Embed-v2)
- Open-ended QA scoring with an LLM-as-a-judge
- Regex-based string matching for instruction following
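The MCQ scoring idea can be sketched as follows: embed the model's free-form prediction and every answer choice, pick the choice with the highest cosine similarity, and compare it to the gold answer. The real pipeline uses NV-Embed-v2; the character-frequency embedding and the `score_mcq` helper below are illustrative stand-ins so the sketch runs without model weights, not the benchmark's actual API.

```python
import numpy as np

def toy_embed(text: str) -> np.ndarray:
    """Placeholder embedding: L2-normalized character-frequency vector.
    A real run would replace this with NV-Embed-v2 embeddings."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def score_mcq(prediction: str, choices: list[str], answer: str) -> bool:
    """Map a free-form prediction to the nearest choice by cosine
    similarity, then check the picked choice against the gold answer."""
    pred = toy_embed(prediction)
    sims = [float(pred @ toy_embed(c)) for c in choices]
    picked = choices[int(np.argmax(sims))]
    return picked == answer

choices = ["a dog barking", "rainfall", "a violin melody"]
print(score_mcq("the sound of a barking dog", choices, "a dog barking"))  # True
```

Because embeddings are normalized, the dot product is directly the cosine similarity; this is why free-form answers like "the sound of a barking dog" can still be credited against the choice "a dog barking".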

---

## 🧪 Baselines & Model Performance

We benchmarked 22 leading models on MMAU-Pro:
- Gemini 2.5 Flash (closed-source): 59.2% avg. accuracy
- Audio Flamingo 3 (open-source): 51.7%
- Qwen2.5-Omni-7B: 52.2%
- Humans: ~78%

See full results in the paper.

---

## 🌍 Multicultural Music Coverage

MMAU-Pro includes music from 8 diverse regions: Western, Chinese, Indian, European, African, Latin American, Middle Eastern, and Other Asian.

This reveals clear biases: models perform well on Western and Chinese music but poorly on Indian and Latin American music.

---

## 📥 Download

- Dataset: [HF](https://huggingface.co/datasets/gamma-lab-umd/MMAU-Pro)
- Paper: [MMAU-Pro](https://arxiv.org/abs/2508.13992)
- Website: [Official Page](https://sonalkum.github.io/mmau-pro/)
- GitHub: [Repository](https://github.com/sonalkum/MMAUPro)

---

## 🧩 Evaluation

The evaluation code takes the complete `test.parquet` with predictions in the column `model_output`:
```shell
python evaluate_mmau_pro_comprehensive.py test.parquet --model_output_column model_output
```

---

## ✍️ Citation

If you use MMAU-Pro, please cite:

```bibtex
@article{kumar2025mmau,
  title={MMAU-Pro: A Challenging and Comprehensive Benchmark for Holistic Evaluation of Audio General Intelligence},
  author={Kumar, Sonal and Sedl{\'a}{\v{c}}ek, {\v{S}}imon and Lokegaonkar, Vaibhavi and L{\'o}pez, Fernando and Yu, Wenyi and Anand, Nishit and Ryu, Hyeonggon and Chen, Lichang and Pli{\v{c}}ka, Maxim and Hlav{\'a}{\v{c}}ek, Miroslav and others},
  journal={arXiv preprint arXiv:2508.13992},
  year={2025}
}
```

---

## 🙏 Acknowledgments

Part of this work was carried out at JSALT 2025.