<div align="center">
<h1>
VCB-Bench: An Evaluation Benchmark for Audio-Grounded Large Language Model Conversational Agents
</h1>
<a href="https://arxiv.org/abs/2510.11098"><img src="https://img.shields.io/badge/arXiv-2510.11098-B31B1B.svg" alt="arXiv"></a>
<a href="https://github.com/Tencent/VCB-Bench"><img src="https://img.shields.io/badge/GitHub-Repo-181717.svg" alt="GitHub"></a>
<a href="https://huggingface.co/datasets/tencent/VCB-Bench"><img src="https://img.shields.io/badge/Hugging%20Face-Data%20Page-yellow" alt="Hugging Face"></a>

</div>

## Introduction

<b>Voice Chat Bot Bench (VCB Bench)</b> is a high-quality Chinese benchmark built entirely on real human speech. It evaluates large audio language models (LALMs) along three complementary dimensions:
<br>
(1) <b>Instruction following</b>: Text Instruction Following (TIF), Speech Instruction Following (SIF), English Text Instruction Following (TIF-En), English Speech Instruction Following (SIF-En), and Multi-turn Dialog (MTD);<br>
(2) <b>Knowledge</b>: General Knowledge (GK), Mathematical Logic (ML), Discourse Comprehension (DC), and Story Continuation (SC);<br>
(3) <b>Robustness</b>: Speaker Variations (SV), Environmental Variations (EV), and Content Variations (CV).

## Getting Started

### Installation:

```bash
git clone https://github.com/Tencent/VCB-Bench.git
cd VCB-Bench
pip install -r requirements.txt
```
Note: to evaluate Qwen3-Omni, set up the separate environment that model requires instead of the one above.

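A minimal sketch of such a separate environment, assuming conda is available; the Python version and extra dependencies below are placeholders, so follow the official Qwen3-Omni setup instructions:
```bash
# Hypothetical isolated environment for Qwen3-Omni; python=3.10 and the extra
# dependencies are assumptions -- consult the official Qwen3-Omni setup guide.
conda create -n qwen3-omni python=3.10 -y
conda activate qwen3-omni
pip install -r requirements.txt  # plus the Qwen3-Omni-specific packages
```
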
### Download Dataset:
Download the dataset from [Hugging Face](https://huggingface.co/datasets/tencent/VCB-Bench) and place the `vcb_bench` folder in `data/downloaded_datasets`.

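If you prefer to script the download, a minimal sketch using the `huggingface-cli` tool that ships with `huggingface_hub` (the target path assumes the repository contents map directly onto the `vcb_bench` folder; adjust it if the repo nests the data differently):
```bash
# Download the dataset repo straight into the folder the evalkit expects.
pip install -U huggingface_hub
huggingface-cli download tencent/VCB-Bench --repo-type dataset \
    --local-dir data/downloaded_datasets/vcb_bench
```
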
### Evaluation:
The evaluation code is adapted from [Kimi-Audio-Evalkit](https://github.com/MoonshotAI/Kimi-Audio-Evalkit), where you can find more details about the evaluation commands.

(1) Inference + Evaluation:
```bash
python run_audio.py --model {model_name} --data {data_name}
```
For example:
```bash
CUDA_VISIBLE_DEVICES=1 python run_audio.py --model Qwen2.5-Omni-7B --data general_knowledge
```

(2) Inference only:
```bash
python run_audio.py --model {model_name} --data {data_name} --skip-eval
```
For example:
```bash
CUDA_VISIBLE_DEVICES=4,5,6,7 python run_audio.py --model StepAudio --data continuation_en creation_en empathy_en recommendation_en rewriting_en safety_en simulation_en emotional_control_en language_control_en non_verbal_vocalization_en pacing_control_en style_control_en volume_control_en --skip-eval
```

(3) Evaluation only:
```bash
python run_audio.py --model {model_name} --data {data_name} --reeval
```
For example:
```bash
CUDA_VISIBLE_DEVICES=2 nohup python run_audio.py --model Mimo-Audio --data continuation creation empathy --reeval
```

(4) Inference + ASR + Evaluation:
```bash
python run_audio.py --model {model_name} --data {data_name} --wasr
```
For example:
```bash
CUDA_VISIBLE_DEVICES=3 python run_audio.py --model StepAudio2 --data rewriting safety simulation continuation_en --wasr
```

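These modes compose: you can run the GPU-bound inference pass for a batch of subsets first, then judge the saved outputs later. A sketch, with the model and data names taken as examples from the tables below:
```bash
# Hypothetical two-stage batch run: inference now, evaluation afterwards.
MODEL=Qwen2.5-Omni-7B
DATA="general_knowledge basic_math math logical_reasoning"

CUDA_VISIBLE_DEVICES=0 python run_audio.py --model $MODEL --data $DATA --skip-eval
python run_audio.py --model $MODEL --data $DATA --reeval
```
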
### Format Result:
Summarize the evaluation results for a model:
```bash
python sumup_eval.py --model {model_name}
```
Optionally export the summary to an Excel file:
```bash
python sumup_eval.py --model {model_name} --export_excel --output_file my_results.xlsx
```

## Supported Datasets and Models
(1) Find the dataset you want to evaluate in the Data Name column of the Datasets table and use it as the {data_name} argument in the evaluation command.<br>
(2) Each dataset in the SV, EV, and CV sections has a paired comparison dataset named "{data_name}_cmp" (see the sketch after this list).<br>
(3) Find the model you want to evaluate in the Model Name column of the Models table and use it as the {model_name} argument.
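For instance, to score a robustness subset alongside its comparison set, pass both names; a sketch assuming the `accent` subset from the SV rows below:
```bash
# Hypothetical paired run: an SV subset plus its "_cmp" comparison set.
python run_audio.py --model Qwen2.5-Omni-7B --data accent accent_cmp
```
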
### Datasets:
<table>
<thead>
<tr>
<th>Data Type</th>
<th>Data Name</th>
<th>Detail</th>
</tr>
</thead>
<tbody>
<tr>
<td class="category" rowspan="7">TIF</td>
<td>continuation</td>
<td>-</td>
</tr>
<tr>
<td>creation</td>
<td>-</td>
</tr>
<tr>
<td>empathy</td>
<td>-</td>
</tr>
<tr>
<td>recommendation</td>
<td>-</td>
</tr>
<tr>
<td>rewriting</td>
<td>-</td>
</tr>
<tr>
<td>safety</td>
<td>-</td>
</tr>
<tr>
<td>simulation</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="7">TIF-En</td>
<td>continuation_en</td>
<td>-</td>
</tr>
<tr>
<td>creation_en</td>
<td>-</td>
</tr>
<tr>
<td>empathy_en</td>
<td>-</td>
</tr>
<tr>
<td>recommendation_en</td>
<td>-</td>
</tr>
<tr>
<td>rewriting_en</td>
<td>-</td>
</tr>
<tr>
<td>safety_en</td>
<td>-</td>
</tr>
<tr>
<td>simulation_en</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="6">SIF</td>
<td>emotional_control</td>
<td>-</td>
</tr>
<tr>
<td>language_control</td>
<td>-</td>
</tr>
<tr>
<td>non_verbal_vocalization</td>
<td>-</td>
</tr>
<tr>
<td>pacing_control</td>
<td>-</td>
</tr>
<tr>
<td>style_control</td>
<td>-</td>
</tr>
<tr>
<td>volume_control</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="6">SIF-En</td>
<td>emotional_control_en</td>
<td>-</td>
</tr>
<tr>
<td>language_control_en</td>
<td>-</td>
</tr>
<tr>
<td>non_verbal_vocalization_en</td>
<td>-</td>
</tr>
<tr>
<td>pacing_control_en</td>
<td>-</td>
</tr>
<tr>
<td>style_control_en</td>
<td>-</td>
</tr>
<tr>
<td>volume_control_en</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="3">MTD</td>
<td>progression</td>
<td>-</td>
</tr>
<tr>
<td>backtracking</td>
<td>-</td>
</tr>
<tr>
<td>transition</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="1">GK</td>
<td>general_knowledge</td>
<td>mathematics, geography, politics, chemistry, biology, law, physics, history, medicine, economics, sports, culture</td>
</tr>
<tr>
<td class="category" rowspan="3">ML</td>
<td>basic_math</td>
<td>-</td>
</tr>
<tr>
<td>math</td>
<td>-</td>
</tr>
<tr>
<td>logical_reasoning</td>
<td>analysis, induction, analogy, logic</td>
</tr>
<tr>
<td class="category" rowspan="1">DC</td>
<td>discourse_comprehension</td>
<td>inference, induction, analysis</td>
</tr>
<tr>
<td class="category" rowspan="4">SV</td>
<td>age</td>
<td>child, elder</td>
</tr>
<tr>
<td>accent</td>
<td>tianjin, beijing, dongbei, sichuan</td>
</tr>
<tr>
<td>volume</td>
<td>down, up</td>
</tr>
<tr>
<td>speed</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="3">EV</td>
<td>non_vocal_noise</td>
<td>echo, outdoors, far_field</td>
</tr>
<tr>
<td>vocal_noise</td>
<td>TV_playback, background_chat, vocal_music, voice_announcement</td>
</tr>
<tr>
<td>unstable_signal</td>
<td>-</td>
</tr>
<tr>
<td class="category" rowspan="5">CV</td>
<td>casual_talk</td>
<td>-</td>
</tr>
<tr>
<td>mispronunciation</td>
<td>-</td>
</tr>
<tr>
<td>grammatical_error</td>
<td>-</td>
</tr>
<tr>
<td>topic_shift</td>
<td>-</td>
</tr>
<tr>
<td>code_switching</td>
<td>-</td>
</tr>
</tbody>
</table>

### Models:

<table>
<thead>
<tr>
<th>Model Type</th>
<th>Model Name</th>
</tr>
</thead>
<tbody>
<tr>
<td class="model-type" rowspan="10">Chat Model</td>
<td>Qwen2-Audio-7B-Instruct</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
</tr>
<tr>
<td>Baichuan-Audio-Chat</td>
</tr>
<tr>
<td>GLM4-Voice</td>
</tr>
<tr>
<td>Kimi-Audio</td>
</tr>
<tr>
<td>Mimo-Audio</td>
</tr>
<tr>
<td>StepAudio</td>
</tr>
<tr>
<td>StepAudio2</td>
</tr>
<tr>
<td>GPT4O-Audio</td>
</tr>
<tr>
<td>Qwen3-Omni-Instruct</td>
</tr>
<tr>
<td class="model-type" rowspan="4">Pretrained Model</td>
<td>Qwen2-Audio-7B</td>
</tr>
<tr>
<td>Baichuan-Audio</td>
</tr>
<tr>
<td>Kimi-Audio-Base</td>
</tr>
<tr>
<td>StepAudio2-Base</td>
</tr>
</tbody>
</table>

## Acknowledgements
We borrow some code from [Kimi-Audio-Evalkit](https://github.com/MoonshotAI/Kimi-Audio-Evalkit), [GLM-4-Voice](https://github.com/zai-org/GLM-4-Voice), [Baichuan-Audio](https://github.com/baichuan-inc/Baichuan-Audio), [Kimi-Audio](https://github.com/MoonshotAI/Kimi-Audio), [Mimo-Audio](https://github.com/XiaomiMiMo/MiMo-Audio), [Step-Audio2](https://github.com/stepfun-ai/Step-Audio2), and [StepAudio](https://github.com/stepfun-ai/Step-Audio).

## Citation
```bibtex
@misc{hu2025vcbbenchevaluationbenchmark,
      title={VCB Bench: An Evaluation Benchmark for Audio-Grounded Large Language Model Conversational Agents},
      author={Jiliang Hu and Wenfu Wang and Zuchao Li and Chenxing Li and Yiyang Zhao and Hanzhao Li and Liqiang Zhang and Meng Yu and Dong Yu},
      year={2025},
      eprint={2510.11098},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2510.11098},
}
```