---
license: cc-by-4.0
language:
- zh
- ja
tags:
- singing
- MOS
size_categories:
- 1K<n<10K
---
NOTICE [Important]: We have released the official version of SingMOS-Pro.
-------------------- Previous Information -----------------------
paper link: SingMOS-Pro: A Comprehensive Benchmark for Singing Quality Assessment
If you want to use the singing-track dataset from VoiceMOS 2024, visit SingMOS_v1.
If you want to use our pretrained SingMOS model, visit our repo at Singing MOS Predictor.
Overview
SingMOS-Pro contains 7,981 Chinese and Japanese vocal clips, totaling 11.15 hours of audio.
Most samples are at 16 kHz, with a small portion at 24 kHz and 44.1 kHz.
To use SingMOS-Pro, start with split.json and score.json; sys_info.json provides additional details about each system.
SingMOS-Pro architecture
|---SingMOS-Pro
    |---wavs
        |---sys0001-utt0001.wav
        ...
    |---info
        |---split.json
        |---score.json
        |---sys_info.json
        |---metadata.csv
Structure of split.json:
{
    "dataset_name": {
        "train": list of utterances in the train set,
        "test": list of utterances in the test set
    }
}
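The split file can be read with the standard json module. A minimal sketch follows; the dataset name and utterance ids in the inline example are invented to mirror the documented schema, and in practice you would load SingMOS-Pro/info/split.json instead.

```python
import json

# Illustrative split.json content mirroring the documented schema
# (the dataset name and utterance ids below are made up for this example)
split_text = '''
{
  "SingMOS-Pro": {
    "train": ["sys0001-utt0001", "sys0001-utt0002"],
    "test": ["sys0002-utt0001"]
  }
}
'''
split = json.loads(split_text)

# In practice, load the real file from the layout above:
# with open("SingMOS-Pro/info/split.json", encoding="utf-8") as f:
#     split = json.load(f)

for dataset_name, subsets in split.items():
    print(dataset_name, len(subsets["train"]), len(subsets["test"]))
```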
Structure of score.json:
{
    "system": {
        "sys_id": {
            "score": system-level MOS,
            "ci": confidence interval of the score
        },
        ...
    },
    "utterance": {
        "utt_id": {
            "sys_id": system id,
            "wav": wav path,
            "score": {
                "mos": utterance-level MOS,
                "scores": list of judge scores,
                "judges": list of judge ids
            }
        },
        ...
    }
}
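A sketch of reading score.json and relating utterance-level judge scores to the stored MOS; all ids, scores, and CI values in the inline fragment are invented for illustration, and the assumption that the utterance MOS is the mean of its judge scores is ours, not stated by the card.

```python
import json
from statistics import mean

# Illustrative score.json fragment following the documented schema
# (ids, scores, and CI values below are invented for this example)
score_text = '''
{
  "system": {
    "sys0001": {"score": 3.52, "ci": 0.12}
  },
  "utterance": {
    "sys0001-utt0001": {
      "sys_id": "sys0001",
      "wav": "wavs/sys0001-utt0001.wav",
      "score": {"mos": 3.5, "scores": [3, 4, 3, 4], "judges": ["j01", "j02", "j03", "j04"]}
    }
  }
}
'''
scores = json.loads(score_text)

utt = scores["utterance"]["sys0001-utt0001"]
# Assumption: the utterance MOS equals the mean of the individual judge scores
assert abs(utt["score"]["mos"] - mean(utt["score"]["scores"])) < 1e-9
print(utt["sys_id"], utt["score"]["mos"])
```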
Structure of sys_info.json:
{
    "sys_id": {
        "type": system type, one of "svs", "svc", or "gt",
        "dataset": source dataset,
        "model": model that generated the samples,
        "sample_rate": sample rate,
        "tag": {
            "domain_id": annotation batch,
            "other_info": additional system information, e.g. speaker-transfer information for svc or the number of codebooks for a codec; "default" means there is no additional information
        }
    }
}
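One common use of sys_info.json is filtering systems by type, e.g. keeping only synthesized systems ("svs"/"svc") and excluding ground truth ("gt"). A minimal sketch; the system ids, dataset, and model names in the inline fragment are invented placeholders.

```python
import json

# Illustrative sys_info.json fragment following the documented schema
# (system ids, dataset, and model names below are invented for this example)
sys_info_text = '''
{
  "sys0001": {
    "type": "svs",
    "dataset": "example_dataset",
    "model": "example_svs_model",
    "sample_rate": 16000,
    "tag": {"domain_id": 1, "other_info": "default"}
  },
  "sys0002": {
    "type": "gt",
    "dataset": "example_dataset",
    "model": "ground_truth",
    "sample_rate": 16000,
    "tag": {"domain_id": 1, "other_info": "default"}
  }
}
'''
sys_info = json.loads(sys_info_text)

# Keep only synthesized systems, dropping ground-truth recordings
generated = [sid for sid, info in sys_info.items() if info["type"] in ("svs", "svc")]
print(generated)  # → ['sys0001']
```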
Update information:
[2024.11.06] We released SingMOS.
[2024.06.26] We released SingMOS_v1.
Citation:
@misc{tang2025singmosprocomprehensivebenchmarksinging,
    title={SingMOS-Pro: An Comprehensive Benchmark for Singing Quality Assessment},
    author={Yuxun Tang and Lan Liu and Wenhao Feng and Yiwen Zhao and Jionghao Han and Yifeng Yu and Jiatong Shi and Qin Jin},
    year={2025},
    eprint={2510.01812},
    archivePrefix={arXiv},
    primaryClass={cs.SD},
    url={https://arxiv.org/abs/2510.01812},
}