---
pretty_name: AudioEval
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
source_datasets:
- original
annotations_creators:
- expert-generated
- crowdsourced
tags:
- audio
- text-to-audio
- benchmark
- evaluation
configs:
- config_name: default
  data_files:
  - split: full
    path: data/**
  drop_labels: true
---

# AudioEval

AudioEval is a text-to-audio evaluation benchmark with 4,200 generated clips, 451 prompts, 24 systems, and 25,200 per-rater annotations.
This release centers on a single clip-level table, `data/metadata.jsonl`.

## Files

- `data/metadata.jsonl`: one clip-level table covering all 4,200 clips.
- `data/*.wav`: audio files referenced by `file_name`.
- `annotations/ratings.csv`: anonymized per-rater annotations.
- `annotations/prompts.tsv`: prompt metadata.
- `annotations/system_info.csv`: system name mapping.
- `stats/*.csv`: reliability and model summary tables.
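
Since `data/metadata.jsonl` is plain JSON Lines, it can also be read without the `datasets` library. A minimal sketch using illustrative records: the field names follow the Main Columns section below, but the values are invented for demonstration only.

```python
import json

# Illustrative JSONL lines; field names follow the card's Main Columns
# section, values are made up.
lines = [
    '{"file_name": "data/example_0001.wav", "prompt_id": "p0001", '
    '"system_id": "sys01", "non_expert_production_quality_mean": 7.33}',
    '{"file_name": "data/example_0002.wav", "prompt_id": "p0002", '
    '"system_id": "sys02", "non_expert_production_quality_mean": 6.67}',
]

# In practice, iterate over an open file handle on data/metadata.jsonl instead.
clips = [json.loads(line) for line in lines]
print(len(clips), clips[0]["system_id"])
```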

## Summary

- 11.712 hours of audio in total, averaging about 10.039 seconds per clip.
- Raters: 9 non-expert and 3 expert.
- Rating rows by rater type: non_expert = 12,600 and expert = 12,600 (each of the 4,200 clips receives ratings from 3 raters of each type).
- Each rating row contains 5 integer scores from 1 to 10, one per evaluation dimension.

## Main Columns

- `file_name`, `wav_name`, `prompt_id`, `prompt_text`
- `scene_category`, `sound_event_count`, `audioset_ontology`
- `system_id`, `system_name`
- `non_expert_*_mean`, `expert_*_mean`
- `non_expert_*_raw_scores`, `expert_*_raw_scores`

The five evaluation dimensions are `production_complexity`, `content_enjoyment`, `production_quality`, `textual_alignment`, and `content_usefulness`.
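
As a sketch of how the `*_raw_scores` and `*_mean` columns relate, assuming `*_raw_scores` holds the list of per-rater 1-10 integers for a clip (which the card implies but does not spell out):

```python
DIMENSIONS = [
    "production_complexity",
    "content_enjoyment",
    "production_quality",
    "textual_alignment",
    "content_usefulness",
]

def dimension_means(row, prefix="non_expert"):
    """Recompute per-dimension means from the raw score lists of one clip row."""
    means = {}
    for dim in DIMENSIONS:
        scores = row[f"{prefix}_{dim}_raw_scores"]
        means[dim] = sum(scores) / len(scores)
    return means

# Invented example row: three non-expert ratings per clip, scores in 1-10.
row = {f"non_expert_{d}_raw_scores": [6, 7, 8] for d in DIMENSIONS}
print(dimension_means(row))
```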

## Loading

Once you have access to the repository on the Hub, you can load the main table like this:

```python
from datasets import load_dataset

# Loads the single "full" split defined in the YAML config above.
data = load_dataset("Hui519/AudioEval", split="full")
print(data[0]["audio"])        # decoded audio (array plus sampling rate)
print(data[0]["prompt_text"])  # the text prompt for this clip
```
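
For the dual-perspective comparison the benchmark is built around, one might aggregate expert and non-expert means per system. A minimal sketch over invented rows (real rows come from `data/metadata.jsonl`; column names follow the Main Columns section):

```python
from collections import defaultdict

# Invented clip rows; in practice, iterate over the loaded dataset.
rows = [
    {"system_id": "sys01", "expert_production_quality_mean": 7.0,
     "non_expert_production_quality_mean": 8.0},
    {"system_id": "sys01", "expert_production_quality_mean": 6.0,
     "non_expert_production_quality_mean": 7.0},
    {"system_id": "sys02", "expert_production_quality_mean": 5.0,
     "non_expert_production_quality_mean": 5.5},
]

# Collect per-clip means by system, separately for each rater type.
totals = defaultdict(lambda: {"expert": [], "non_expert": []})
for r in rows:
    totals[r["system_id"]]["expert"].append(r["expert_production_quality_mean"])
    totals[r["system_id"]]["non_expert"].append(r["non_expert_production_quality_mean"])

# Average within each system to get a simple dual-perspective leaderboard.
leaderboard = {
    sys_id: {k: sum(v) / len(v) for k, v in buckets.items()}
    for sys_id, buckets in totals.items()
}
print(leaderboard)
```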

Rater demographic tables are intentionally excluded from this release.

## License

This dataset is released under Creative Commons Attribution-NonCommercial 4.0 International (`CC BY-NC 4.0`).

## Citation

```bibtex
@article{wang2025audioeval,
  title={AudioEval: Automatic dual-perspective and multi-dimensional evaluation of text-to-audio generation},
  author={Wang, Hui and Zhao, Jinghua and Cheng, Junyang and Liu, Cheng and Jia, Yuhang and Sun, Haoqin and Zhou, Jiaming and Qin, Yong},
  journal={arXiv preprint arXiv:2510.14570},
  year={2025}
}
```