
VCB-Bench: An Evaluation Benchmark for Audio-Grounded Large Language Model Conversational Agents


Introduction

Voice Chat Bot Bench (VCB Bench) is a high-quality Chinese benchmark built entirely on real human speech. It evaluates large audio language models (LALMs) along three complementary dimensions:
(1) Instruction following: Text Instruction Following (TIF), Speech Instruction Following (SIF), English Text Instruction Following (TIF-En), English Speech Instruction Following (SIF-En), and Multi-turn Dialog (MTD);
(2) Knowledge: General Knowledge (GK), Mathematical Logic (ML), Discourse Comprehension (DC), and Story Continuation (SC);
(3) Robustness: Speaker Variations (SV), Environmental Variations (EV), and Content Variations (CV).

Getting Started

Installation:

git clone https://github.com/Tencent/VCB-Bench.git
cd VCB-Bench
pip install -r requirements.txt

Note: To evaluate Qwen3-Omni, switch to the dedicated environment that model requires.

Download Dataset:

Download the dataset from Hugging Face and place the `vcb_bench` folder in `data/downloaded_datasets`.
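
If you prefer to script the download, the step above can be sketched in Python with `huggingface_hub`. The repo id used below is an assumption for illustration; check the dataset's Hugging Face page for the actual id.

```python
from pathlib import Path

def dataset_dir(root: str = "data/downloaded_datasets") -> Path:
    """Expected local location of the benchmark data, per the layout above."""
    return Path(root) / "vcb_bench"

def download(repo_id: str = "Tencent/VCB-Bench") -> Path:
    # repo_id is an assumption -- verify it on the Hugging Face page.
    from huggingface_hub import snapshot_download
    target = dataset_dir()
    snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir=str(target))
    return target

if __name__ == "__main__":
    print(download())
```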

Evaluation:

This code is adapted from Kimi-Audio-Evalkit, where you can find more details about the evaluation commands.

(1) Inference + Evaluation:

python run_audio.py --model {model_name} --data {data_name}

For example:

CUDA_VISIBLE_DEVICES=1 python run_audio.py --model Qwen2.5-Omni-7B --data general_knowledge

(2) Only Inference:

python run_audio.py --model {model_name} --data {data_name} --skip-eval

For example:

CUDA_VISIBLE_DEVICES=4,5,6,7 python run_audio.py --model StepAudio --data continuation_en creation_en empathy_en recommendation_en rewriting_en safety_en simulation_en emotional_control_en language_control_en non_verbal_vocalization_en pacing_control_en style_control_en volume_control_en --skip-eval
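
For batch runs across many models and datasets, the invocations above can also be assembled from Python. This is a minimal sketch mirroring the CLI flags documented in this README; the helper names are ours, not part of the repo.

```python
import os
import subprocess

def build_cmd(model: str, datasets: list[str], skip_eval: bool = False,
              reeval: bool = False, wasr: bool = False) -> list[str]:
    """Assemble a run_audio.py invocation from the flags documented above."""
    cmd = ["python", "run_audio.py", "--model", model, "--data", *datasets]
    if skip_eval:
        cmd.append("--skip-eval")
    if reeval:
        cmd.append("--reeval")
    if wasr:
        cmd.append("--wasr")
    return cmd

def run(model: str, datasets: list[str], gpus: str = "0", **flags) -> int:
    # Select GPUs via CUDA_VISIBLE_DEVICES, as in the shell examples.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpus)
    return subprocess.call(build_cmd(model, datasets, **flags), env=env)
```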

(3) Only Evaluation:

python run_audio.py --model {model_name} --data {data_name} --reeval

For example:

CUDA_VISIBLE_DEVICES=2 nohup python run_audio.py --model Mimo-Audio --data continuation creation empathy --reeval &

(4) Inference + ASR + Evaluation:

python run_audio.py --model {model_name} --data {data_name} --wasr

For example:

CUDA_VISIBLE_DEVICES=3 python run_audio.py --model StepAudio2 --data rewriting safety simulation continuation_en --wasr

Format Result:

python sumup_eval.py --model {model_name}
python sumup_eval.py --model {model_name} --export_excel --output_file my_results.xlsx
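
As a rough illustration of what result aggregation involves, here is a sketch that collects per-dataset scores into a markdown table. The one-score-per-JSON-file layout is an assumption for illustration only; `sumup_eval.py` is the authoritative aggregator.

```python
import json
from pathlib import Path

def collect_scores(result_dir: str) -> dict[str, float]:
    """Gather per-dataset scores from JSON files shaped like {"score": 87.5}.

    This file layout is assumed for the example, not taken from the repo.
    """
    scores = {}
    for path in sorted(Path(result_dir).glob("*.json")):
        with open(path, encoding="utf-8") as f:
            scores[path.stem] = json.load(f)["score"]
    return scores

def to_markdown(scores: dict[str, float]) -> str:
    """Render the scores as a simple two-column markdown table."""
    lines = ["| Dataset | Score |", "|---|---|"]
    lines += [f"| {name} | {val:.2f} |" for name, val in scores.items()]
    return "\n".join(lines)
```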

Supported Datasets and Models

(1) Find the dataset you want to evaluate in the Data Name column of the Datasets table and pass it as the {data_name} argument in the evaluation command.
(2) Each dataset in the SV, EV, and CV sections has a corresponding comparison dataset named "{data_name}_cmp".
(3) Find the model you want to evaluate in the Model Name column of the Models table and pass it as the {model_name} argument.
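
The "_cmp" naming convention from point (2) can be enumerated mechanically. The dataset lists below are copied from the Datasets table; the helper functions are ours.

```python
# Robustness datasets, copied from the SV / EV / CV rows of the Datasets table.
ROBUSTNESS_DATASETS = {
    "SV": ["age", "accent", "volume", "speed"],
    "EV": ["non_vocal_noise", "vocal_noise", "unstable_signal"],
    "CV": ["casual_talk", "mispronunciation", "grammatical_error",
           "topic_shift", "code_switching"],
}

def cmp_name(data_name: str) -> str:
    """Return the paired comparison dataset per the {data_name}_cmp convention."""
    return f"{data_name}_cmp"

def all_pairs() -> list[tuple[str, str]]:
    """Every (dataset, comparison dataset) pair across SV, EV, and CV."""
    return [(d, cmp_name(d))
            for names in ROBUSTNESS_DATASETS.values() for d in names]
```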

Datasets:

| Data Type | Data Name | Detail |
|---|---|---|
| TIF | continuation | - |
| | creation | - |
| | empathy | - |
| | recommendation | - |
| | rewriting | - |
| | safety | - |
| | simulation | - |
| TIF-En | continuation_en | - |
| | creation_en | - |
| | empathy_en | - |
| | recommendation_en | - |
| | rewriting_en | - |
| | safety_en | - |
| | simulation_en | - |
| SIF | emotional_control | - |
| | language_control | - |
| | non_verbal_vocalization | - |
| | pacing_control | - |
| | style_control | - |
| | volume_control | - |
| SIF-En | emotional_control_en | - |
| | language_control_en | - |
| | non_verbal_vocalization_en | - |
| | pacing_control_en | - |
| | style_control_en | - |
| | volume_control_en | - |
| MTD | progression | - |
| | backtracking | - |
| | transition | - |
| GK | general_knowledge | mathematics, geography, politics, chemistry, biology, law, physics, history, medicine, economics, sports, culture |
| ML | basic_math | - |
| | math | - |
| | logical_reasoning | analysis, induction, analogy, logic |
| DC | discourse_comprehension | inference, induction, analysis |
| SV | age | child, elder |
| | accent | tianjin, beijing, dongbei, sichuan |
| | volume | down, up |
| | speed | - |
| EV | non_vocal_noise | echo, outdoors, far_field |
| | vocal_noise | TV_playback, background_chat, vocal_music, voice_announcement |
| | unstable_signal | - |
| CV | casual_talk | - |
| | mispronunciation | - |
| | grammatical_error | - |
| | topic_shift | - |
| | code_switching | - |

Models:

| Model Type | Model Name |
|---|---|
| Chat Model | Qwen2-Audio-7B-Instruct |
| | Qwen2.5-Omni-7B |
| | Baichuan-Audio-Chat |
| | GLM4-Voice |
| | Kimi-Audio |
| | Mimo-Audio |
| | StepAudio |
| | StepAudio2 |
| | GPT4O-Audio |
| | Qwen3-Omni-Instruct |
| Pretrain Model | Qwen2-Audio-7B |
| | Baichuan-Audio |
| | Kimi-Audio-Base |
| | StepAudio2-Base |

Acknowledgements

We borrow code from Kimi-Audio-Evalkit, GLM-4-Voice, Baichuan-Audio, Kimi-Audio, Mimo-Audio, Step-Audio2, and StepAudio.

Citation

@misc{hu2025vcbbenchevaluationbenchmark,
      title={VCB Bench: An Evaluation Benchmark for Audio-Grounded Large Language Model Conversational Agents}, 
      author={Jiliang Hu and Wenfu Wang and Zuchao Li and Chenxing Li and Yiyang Zhao and Hanzhao Li and Liqiang Zhang and Meng Yu and Dong Yu},
      year={2025},
      eprint={2510.11098},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2510.11098}, 
}