---
pretty_name: AudioEval
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
source_datasets:
- original
annotations_creators:
- expert-generated
- crowdsourced
tags:
- audio
- text-to-audio
- benchmark
- evaluation
configs:
- config_name: default
  data_files:
  - split: full
    path: data/**
  drop_labels: true
---
# AudioEval
AudioEval is a text-to-audio evaluation benchmark with 4200 generated clips, 451 prompts, 24 systems, and 25200 per-rater annotations.
This release uses one main clip table in data/metadata.jsonl.
## Files
- `data/metadata.jsonl`: one clip-level table for all 4200 clips.
- `data/*.wav`: audio files referenced by `file_name`.
- `annotations/ratings.csv`: anonymized per-rater annotations.
- `annotations/prompts.tsv`: prompt metadata.
- `annotations/system_info.csv`: system name mapping.
- `stats/*.csv`: reliability and model summary tables.
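Since `data/metadata.jsonl` is a standard JSON Lines file, it can also be read without the `datasets` library. A minimal sketch, using only the stdlib; the field names come from the Main Columns section below, and the toy records stand in for real rows:

```python
import json

# Toy JSONL content standing in for data/metadata.jsonl (illustrative values,
# not real dataset rows).
jsonl_text = "\n".join([
    json.dumps({"file_name": "data/clip_0001.wav", "prompt_id": 1,
                "prompt_text": "a dog barking in the rain", "system_id": 7}),
    json.dumps({"file_name": "data/clip_0002.wav", "prompt_id": 2,
                "prompt_text": "ocean waves crashing", "system_id": 3}),
])

# Parse one JSON object per line, exactly as you would for the real table.
rows = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
print(rows[0]["prompt_text"])
```

With the real file, replace `jsonl_text.splitlines()` with iteration over `open("data/metadata.jsonl")`.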
## Summary
- 11.712 total hours of audio, about 10.039 seconds per clip on average.
- There are 9 non-expert raters and 3 expert raters.
- Rating rows by rater type: non_expert=12600, expert=12600.
- Each rating row contains 5 integer scores from 1 to 10.
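These counts are internally consistent: the 25200 rating rows split evenly between the two rater pools, which works out to an average of three expert and three non-expert ratings per clip. A quick arithmetic check (no dataset access needed):

```python
clips = 4200
non_expert_rows = 12600
expert_rows = 12600

# Total per-rater annotation rows across both pools.
total_rows = non_expert_rows + expert_rows
print(total_rows)  # 25200, matching the benchmark summary

# Average ratings per clip from each pool: 12600 / 4200 = 3.
ratings_per_clip_expert = expert_rows // clips
ratings_per_clip_non_expert = non_expert_rows // clips
print(ratings_per_clip_expert, ratings_per_clip_non_expert)  # 3 3
```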
## Main Columns
- `file_name`, `wav_name`, `prompt_id`, `prompt_text`
- `scene_category`, `sound_event_count`, `audioset_ontology`
- `system_id`, `system_name`
- `non_expert_*_mean`, `expert_*_mean`
- `non_expert_*_raw_scores`, `expert_*_raw_scores`
The five evaluation dimensions are production_complexity, content_enjoyment, production_quality, textual_alignment, and content_usefulness.
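Each `*_mean` column can be recomputed from the matching `*_raw_scores` column by simple averaging. A sketch under the assumption that raw scores are stored as lists of integers on the stated 1-10 scale; the variable name and values below are illustrative:

```python
# Hypothetical raw expert scores for one clip on one dimension
# (three expert raters, integer scores on the 1-10 scale).
expert_production_quality_raw_scores = [7, 8, 6]

# Recompute the clip-level mean that the corresponding *_mean column holds.
mean_score = (sum(expert_production_quality_raw_scores)
              / len(expert_production_quality_raw_scores))
print(mean_score)  # 7.0
```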
## Loading
Once you have access to the repository on the Hub, you can load the main table like this:
```python
from datasets import load_dataset

data = load_dataset("Hui519/AudioEval", split="full")
print(data[0]["audio"])
print(data[0]["prompt_text"])
```
Note: rater demographic tables are intentionally excluded from this release.
## License
This dataset is released under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
## Citation
```bibtex
@article{wang2025audioeval,
  title={AudioEval: Automatic dual-perspective and multi-dimensional evaluation of text-to-audio generation},
  author={Wang, Hui and Zhao, Jinghua and Cheng, Junyang and Liu, Cheng and Jia, Yuhang and Sun, Haoqin and Zhou, Jiaming and Qin, Yong},
  journal={arXiv preprint arXiv:2510.14570},
  year={2025}
}
```