# WavBench: Benchmarking Reasoning, Colloquialism, and Paralinguistics for End-to-End Spoken Dialogue Models

[**📖 Paper**](https://arxiv.org/abs/2602.12135) | [**🏠 Website**](https://naruto-2024.github.io/wavbench.github.io/)

## Overview of WavBench

<div align="center">
<img src="assets/colloquial_expression.png" width="80%"/>
<br>
<em>Figure 1: Examples of Colloquial Expression in WavBench, covering diverse cognitive domains across the Basic and Pro subsets.</em>
</div>
<br>

<div align="center">
<img src="assets/acoustic_interaction.png" width="80%"/>
<br>
<em>Figure 2: Examples of Acoustic Interaction in WavBench, demonstrating Explicit Understanding, Explicit Generation, and Implicit Dialogue.</em>
</div>

## News
* **`2026.02.11`** Released the **WavBench** paper, code, and dataset.
* **`2026.02.11`** Released the leaderboard evaluating 5 state-of-the-art E2E spoken dialogue models.

## Table of Contents
- [**Leaderboard**](#leaderboard)
- [**Setup**](#setup)
- [**Dataset**](#dataset)
- [**Evaluation**](#evaluation)
- [**Citation**](#citation)

## Leaderboard

Below is the overall evaluation of WavBench across five panels: **Colloquial Expression** (Pro & Basic) and **Acoustic Interaction** (Explicit Understanding, Explicit Generation, and Implicit).

| Metrics / Tasks | Qwen3-Omni | Kimi-Audio | Mimo-Audio | Step-Audio-2 | GPT-4o Audio |
|:---|:---:|:---:|:---:|:---:|:---:|
| **Panel A: Colloquial (Pro)** | | | | | |
| Code | 39.75 | 30.29 | 28.96 | 31.20 | **53.60** |
| Creativity | 48.39 | 31.78 | 42.86 | 35.00 | **63.00** |
| Instruction | 43.01 | 29.86 | 36.44 | 29.40 | **57.80** |
| Logic | 33.21 | 26.03 | 27.57 | 26.20 | **42.60** |
| Math | 38.55 | 27.30 | 25.68 | 22.40 | **50.20** |
| QA | 50.93 | 42.54 | 41.28 | 40.80 | **72.80** |
| Safety | 60.00 | 56.19 | 56.19 | 52.40 | **67.60** |
| **Avg (Pro)** | 39.53 | 30.79 | 32.02 | 30.40 | **58.23** |
| | | | | | |
| **Panel B: Colloquial (Basic)** | | | | | |
| Code | 53.10 | 40.69 | 42.07 | 37.20 | **58.00** |
| Creativity | 57.44 | 41.57 | 45.29 | 47.20 | **71.20** |
| Instruction | 57.29 | 44.41 | 33.56 | 36.60 | **66.80** |
| Logic | 52.35 | 50.74 | 49.91 | 48.80 | **67.00** |
| Math | 51.05 | 41.27 | 38.73 | 30.20 | **62.40** |
| QA | 57.54 | 49.07 | 49.12 | 48.60 | **75.60** |
| Safety | 59.67 | 58.83 | 62.83 | 60.20 | **81.00** |
| **Avg (Basic)** | 55.80 | 49.23 | 49.57 | 48.50 | **68.80** |
| | | | | | |
| **Panel C: Explicit Understanding** | | | | | |
| Accent | **37.50** | 11.00 | 27.00 | 20.67 | 15.67 |
| Age | 64.33 | 53.67 | 53.00 | **67.67** | 20.33 |
| Emotion | **92.86** | 77.33 | 77.33 | 75.43 | 85.90 |
| Gender | 21.00 | 44.50 | 20.00 | **68.00** | 61.50 |
| Language | 83.50 | 91.00 | 53.50 | 96.50 | **97.00** |
| Pitch | 32.44 | 23.11 | 24.00 | **34.22** | 23.56 |
| Speed | 46.67 | **54.67** | 48.89 | 44.00 | 48.00 |
| Volume | 33.78 | 38.22 | 31.11 | **50.67** | 41.78 |
| Audio Event | 61.73 | **67.90** | 19.75 | 39.51 | 59.26 |
| Music | 22.22 | 66.67 | 55.56 | **77.78** | 33.33 |
| **Avg (Understand)** | 49.60 | 52.80 | 41.02 | **57.36** | 48.70 |
| | | | | | |
| **Panel D: Explicit Generation** | | | | | |
| Accent | 37.50 | 3.52 | 23.44 | 22.07 | **74.22** |
| Age | 64.65 | 46.88 | 51.95 | 31.64 | **78.12** |
| Emotion | 90.04 | 50.29 | 57.13 | 66.50 | **95.51** |
| Gender | 72.27 | 45.31 | 67.58 | 59.77 | **98.83** |
| Language | 89.84 | 74.80 | 51.56 | **91.41** | 87.89 |
| Pitch | 76.56 | 47.27 | 80.27 | 55.66 | **85.74** |
| Speed | 43.75 | 47.27 | 51.56 | **69.14** | 66.60 |
| Volume | 56.25 | 64.06 | 59.96 | 57.03 | **82.42** |
| Audio | 27.03 | 10.81 | 9.46 | 32.43 | **45.95** |
| Music | 62.50 | 20.83 | 16.67 | 70.83 | **77.08** |
| **Avg (Generation)** | 62.03 | 41.10 | 46.93 | 55.65 | **79.23** |
| | | | | | |
| **Panel E: Implicit** | | | | | |
| Single-Turn (Text) | 1.85 | 1.84 | 2.23 | 1.12 | **2.43** |
| Single-Turn (Audio) | 3.17 | 3.21 | 2.47 | **3.50** | 2.96 |
| Multi-Turn (Text) | **4.88** | 4.57 | 4.61 | 4.38 | 4.48 |
| Multi-Turn (Audio) | **1.25** | 1.08 | 1.04 | 1.21 | 1.23 |
| **Avg (Implicit)** | **2.78** | 2.67 | 2.59 | 2.55 | **2.78** |

## Setup

```shell
git clone https://github.com/NARUTO-2024/WavBench.git
cd WavBench

conda create -n wavbench python=3.10
conda activate wavbench
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
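
As an optional sanity check (a suggestion, not part of the official setup), confirm that the CUDA build of PyTorch can actually see a GPU before running inference:

```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```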

## Dataset

The data used in this project is available in the [WavBench Dataset](https://huggingface.co/datasets/WavBench/WavBench) hosted on Hugging Face.

You can load the dataset directly with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset directly from Hugging Face
ds = load_dataset("WavBench/WavBench")
```

Alternatively, you can download the dataset to a local directory and point the scripts at it via `--data_dir` (see [Evaluation](#evaluation)); one way to do that is sketched below.
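
For example, the standard Hugging Face Hub CLI can mirror the dataset locally. The `./wavbench` target here is only an example, chosen to match the default `--data_dir` used by `main.py`:

```shell
# Install the Hugging Face Hub CLI if it is not already available
pip install -U "huggingface_hub[cli]"

# Mirror the dataset repository into ./wavbench
huggingface-cli download WavBench/WavBench --repo-type dataset --local-dir ./wavbench
```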

### 1. Colloquial Expression
This category is divided into **Basic** and **Pro** subsets. Each subset contains tasks across 7 diverse cognitive domains:

| Domain | Description |
| :--- | :--- |
| **Code** | Evaluates the model's ability to explain code logic conversationally. |
| **Creative** | Evaluates creative writing without rigid formatting constraints. |
| **Instruction** | Evaluates adherence to spoken instructions. |
| **Logic** | Evaluates logical reasoning in a spoken context. |
| **Math** | Evaluates the verbalization of mathematical reasoning. |
| **QA** | Evaluates general-knowledge question answering. |
| **Safety** | Evaluates safety mechanisms in spoken interaction. |

### 2. Acoustic Interaction
This category evaluates the model's paralinguistic capabilities across three dimensions: **Explicit Understanding**, **Explicit Generation**, and **Implicit**.

| Category | Sub-tasks / Attributes |
| :--- | :--- |
| **Explicit Understanding** | **10 attributes:** Accent, Age, Emotion, Gender, Language, Pitch, Speed, Volume, Audio, Music. |
| **Explicit Generation** | **10 attributes:** Accent, Age, Emotion, Gender, Language, Pitch, Speed, Volume, Audio, Music. |
| **Implicit** | Single-turn Audio, Single-turn Text, Multi-turn Audio, Multi-turn Text. |

## Evaluation

### Step 1: Run Inference
`main.py` is the unified entry point for all dataset types.

```bash
# Colloquial inference (Basic), with audio output
python main.py --model step_audio2 --data basic_code --audio_output

# Colloquial inference (Pro), with audio output
python main.py --model step_audio2 --data pro_math --audio_output

# Acoustic single-turn inference, with audio output
python main.py --model step_audio2 --data acoustic_explicit_generation_emotion --audio_output

# Acoustic multi-round inference, with audio output
python main.py --model step_audio2 --data acoustic_multi_round_generation --audio_output

# [Optional] Run with a custom data directory
python main.py --model step_audio2 --data basic_code --data_dir /path/to/your/wavbench
```

**Supported Arguments:**
* `--model`: Model name (e.g., `step_audio2`).
* `--data`: Dataset name (e.g., `basic_code`, `pro_math`, `acoustic_explicit_generation_emotion`); the full list is in the options table under Step 2, and a batch sketch follows this list.
* `--data_dir`: Optional. Base directory for WavBench data (default: `./wavbench`). Use this if you downloaded the dataset somewhere other than the default location.
* `--audio_output`: **Important flag**. If set, the model generates audio files in addition to text.
  * **Required** for all **Acoustic** tasks (their evaluation relies on the audio).
  * **Optional** for **Colloquial** tasks (useful for checking TTS quality manually).
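
To run a whole panel in one go, a plain shell loop over dataset names works. This is a sketch, not a script shipped with the repo; the names are copied verbatim from the options table under Step 2:

```bash
# Sketch: run inference over every Basic colloquial subset
for task in basic_code basic_creative basic_instruction basic_logic \
            basic_math basic_qa basic_satety; do
  python main.py --model step_audio2 --data "$task" --audio_output
done
```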

### Step 2: Automatic Evaluation
`evaluate.py` uses an LLM judge (Gemini) to score responses against the specific criteria of each subset.

```bash
# Set the API key via an environment variable
export GOOGLE_API_KEY="your-api-key"

# Evaluate ALL Colloquial datasets
python evaluate.py --eval_type colloquial --dataset all

# Evaluate a SPECIFIC Colloquial dataset
python evaluate.py --eval_type colloquial --dataset basic_code

# Evaluate ALL Acoustic datasets
python evaluate.py --eval_type acoustic --dataset all

# Evaluate a SPECIFIC Acoustic dataset
python evaluate.py --eval_type acoustic --dataset explicit_generation_emotion
```

**Supported Arguments:**
* `--eval_type`: Choose between `colloquial` and `acoustic`.
* `--dataset`: A specific dataset name (e.g., `basic_code`), or `all` to run the entire suite.

<details>
<summary><strong>👇 Available Dataset Options (for <code>--data</code> / <code>--dataset</code>)</strong></summary>

| Category | Available Values |
| :--- | :--- |
| **Basic** | `basic_code`, `basic_creative`, `basic_instruction`, `basic_logic`, `basic_math`, `basic_qa`, `basic_satety` |
| **Pro** | `pro_code`, `pro_creative`, `pro_instruction`, `pro_logic`, `pro_math`, `pro_qa`, `pro_satety` |
| **Explicit Generation** | `acoustic_explicit_generation_accent`, `acoustic_explicit_generation_age`, `acoustic_explicit_generation_audio`, `acoustic_explicit_generation_emotion`, `acoustic_explicit_generation_gender`, `acoustic_explicit_generation_lang`, `acoustic_explicit_generation_music`, `acoustic_explicit_generation_pitch`, `acoustic_explicit_generation_speed`, `acoustic_explicit_generation_volume` |
| **Explicit Understanding** | `acoustic_explicit_understanding_accent`, `acoustic_explicit_understanding_age`, `acoustic_explicit_understanding_audio`, `acoustic_explicit_understanding_emotion`, `acoustic_explicit_understanding_gender`, `acoustic_explicit_understanding_lang`, `acoustic_explicit_understanding_music`, `acoustic_explicit_understanding_pitch`, `acoustic_explicit_understanding_speed`, `acoustic_explicit_understanding_volume` |
| **Implicit** | `acoustic_implicit_age_generation`, `acoustic_implicit_emotion_generation`, `acoustic_implicit_pitch_generation`, `acoustic_implicit_speed_generation`, `acoustic_implicit_understanding` |
| **Multi-round** | `acoustic_multi_round_generation`, `acoustic_multi_round_understanding` |

</details>
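
Internally, Step 2 follows the standard LLM-as-judge pattern. The minimal sketch below shows the general shape using the `google-generativeai` client; the model name, prompt, and 0-100 scoring scale here are illustrative placeholders, not the actual prompts shipped in `evaluate.py`:

```python
import os
import google.generativeai as genai

# Reads the same GOOGLE_API_KEY environment variable as evaluate.py
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
judge = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

# Hypothetical rubric: the real criteria are subset-specific (see evaluate.py)
prompt = (
    "You are grading a spoken dialogue model's transcribed answer.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Score 0-100 for correctness and colloquial style; reply with the number only."
)

response = judge.generate_content(prompt.format(question="...", answer="..."))
print(response.text.strip())
```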

### Step 3: Get Statistics
`statistics.py` aggregates the evaluation results into a final report.

```bash
# Basic usage: write the report to a TXT file
python statistics.py --eval_dir ./eval_results --output ./statistics.txt

# Advanced usage: write TXT and CSV reports simultaneously
python statistics.py --eval_dir ./eval_results --output ./statistics.txt --csv
```
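
Putting the three steps together, a full run for a single subset looks like this (assuming the default output locations used in the commands above, with evaluation results landing in `./eval_results`):

```bash
# End-to-end sketch: inference -> LLM-judge evaluation -> aggregated report
export GOOGLE_API_KEY="your-api-key"
python main.py --model step_audio2 --data acoustic_explicit_generation_emotion --audio_output
python evaluate.py --eval_type acoustic --dataset explicit_generation_emotion
python statistics.py --eval_dir ./eval_results --output ./statistics.txt --csv
```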

## Citation
If you use WavBench in your research, please cite the following paper:

```bibtex
@misc{li2026wavbenchbenchmarkingreasoningcolloquialism,
      title={WavBench: Benchmarking Reasoning, Colloquialism, and Paralinguistics for End-to-End Spoken Dialogue Models},
      author={Yangzhuo Li and Shengpeng Ji and Yifu Chen and Tianle Liang and Haorong Ying and Yule Wang and Junbo Li and Jun Fang and Zhou Zhao},
      year={2026},
      eprint={2602.12135},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.12135},
}
```