# MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark

[arXiv paper](https://arxiv.org/pdf/2506.04779) | [GitHub repo](https://github.com/dingdongwang/MMSU_Bench)



## Overview of MMSU

MMSU (Massive Multi-task Spoken Language Understanding and Reasoning Benchmark) is a comprehensive benchmark for evaluating fine-grained spoken language understanding and reasoning in multimodal models.

The benchmark covers **47 sub-tasks** spanning perception and reasoning across phonetics, prosody, semantics, sociolinguistics, and rhetoric. It contains **5,000 carefully constructed audio-question-answer pairs**, with speech collected from diverse authentic recordings.


## Usage

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("ddwang2000/MMSU")
```

For evaluation, please refer to the [**GitHub code**](https://github.com/dingdongwang/MMSU_Bench).
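The repository hosts the official evaluation scripts. As a minimal sketch of the kind of scoring involved, not the official pipeline, multiple-choice accuracy over model predictions and gold answers (both represented here as hypothetical choice letters) can be computed as:

```python
# Minimal multiple-choice accuracy sketch (NOT the official MMSU
# evaluation script; the choice-letter representation is an assumption).
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference choice."""
    if not references:
        raise ValueError("no references given")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example with hypothetical model outputs and gold answers.
preds = ["A", "C", "B", "D"]
golds = ["A", "B", "B", "D"]
print(f"accuracy = {accuracy(preds, golds):.2f}")  # accuracy = 0.75
```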
## Citation

```
@article{wang2025mmsu,