TangRain committed 1241f08 (verified, parent: c15422a): Update README.md
Files changed: README.md (+77 −54)

  - 1B<n<10B
---

# SingMOS-Pro 🎵

**[Important Notice]**
We have officially released the **[SingMOS-Pro dataset](https://huggingface.co/datasets/TangRain/SingMOS-Pro)**.

---

## 🔍 Related Resources

- **Paper:** [*SingMOS-Pro: A Comprehensive Benchmark for Singing Quality Assessment*](https://arxiv.org/abs/2510.01812)
- **Earlier Dataset (VoiceMOS 2024 Track):** [SingMOS_v1](https://huggingface.co/datasets/TangRain/SingMOS_v1)
- **Pretrained Model:** [Singing MOS Predictor](https://github.com/South-Twilight/SingMOS/tree/main)

---

## 📘 Overview

**SingMOS-Pro** contains **7,981** Chinese and Japanese singing clips, totaling **11.15 hours** of audio.
The recordings are mainly at **16 kHz**, with some at **24 kHz** and **44.1 kHz**.

To use the dataset properly, please refer to the following files:

- `split.json` – dataset split information (train/test)
- `score.json` – system- and utterance-level MOS scores
- `sys_info.json` – metadata for each system

---

## 📂 Dataset Structure
```
SingMOS-Pro
├── wavs
│   ├── sys0001-utt0001.wav
│   └── ...
└── info
    ├── split.json
    ├── score.json
    ├── sys_info.json
    └── metadata.csv
```

### Structure of `split.json`

```json
{
  "dataset_name": {
    "train": [...],
    "test": [...]
  }
}
```
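As a minimal sketch of reading this schema (the JSON fragment below uses invented utterance ids, not real data), the splits can be traversed like so:

```python
import json

# Toy split.json content following the schema above (ids are invented).
split_text = '''
{
  "dataset_name": {
    "train": ["sys0001-utt0001", "sys0001-utt0002"],
    "test": ["sys0002-utt0001"]
  }
}
'''
split = json.loads(split_text)

# Each key is a sub-dataset; "train" and "test" hold lists of utterance ids.
for name, subsets in split.items():
    print(name, len(subsets["train"]), len(subsets["test"]))
```

In the released dataset you would replace the literal string with `open("info/split.json")` relative to the download root.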

### Structure of `score.json`

```json
{
  "system": {
    "sys_id": {
      "score": float,
      "ci": float
    }
  },
  "utterance": {
    "utt_id": {
      "sys_id": str,
      "wav": str,
      "score": {
        "mos": float,
        "scores": [float],
        "judges": [str]
      }
    }
  }
}
```
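To illustrate how the two levels relate, here is a hedged sketch with invented toy values: it recomputes a system-level score as the mean of that system's utterance-level MOS values. This is one plausible aggregation; in the released file the system-level scores and confidence intervals come precomputed.

```python
import json
from statistics import mean

# Toy score.json fragment following the schema above (all values invented).
score_text = '''
{
  "system": {
    "sys0001": {"score": 3.5, "ci": 0.12}
  },
  "utterance": {
    "sys0001-utt0001": {
      "sys_id": "sys0001",
      "wav": "wavs/sys0001-utt0001.wav",
      "score": {"mos": 3.0, "scores": [3.0, 2.0, 4.0], "judges": ["j1", "j2", "j3"]}
    },
    "sys0001-utt0002": {
      "sys_id": "sys0001",
      "wav": "wavs/sys0001-utt0002.wav",
      "score": {"mos": 4.0, "scores": [4.0, 4.0, 4.0], "judges": ["j1", "j2", "j3"]}
    }
  }
}
'''
scores = json.loads(score_text)

# Group utterance-level MOS by system, then average per system.
by_system = {}
for utt_id, utt in scores["utterance"].items():
    by_system.setdefault(utt["sys_id"], []).append(utt["score"]["mos"])
system_mos = {sys_id: mean(v) for sys_id, v in by_system.items()}
print(system_mos)  # {'sys0001': 3.5}
```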

### Structure of `sys_info.json`

```json
{
  "sys_id": {
    "type": "svs" | "svc" | "gt",
    "dataset": str,
    "model": str,
    "sample_rate": int,
    "tag": {
      "domain_id": str,
      "other_info": str
    }
  }
}
```
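A short sketch of filtering systems by `type` under this schema (the metadata values below, such as the dataset and model names, are placeholders, not real entries):

```python
import json

# Toy sys_info.json fragment following the schema above (values are placeholders).
sys_info_text = '''
{
  "sys0001": {
    "type": "svs",
    "dataset": "example_dataset",
    "model": "example_model",
    "sample_rate": 16000,
    "tag": {"domain_id": "batch1", "other_info": "default"}
  },
  "sys0002": {
    "type": "gt",
    "dataset": "example_dataset",
    "model": "none",
    "sample_rate": 16000,
    "tag": {"domain_id": "batch1", "other_info": "default"}
  }
}
'''
sys_info = json.loads(sys_info_text)

# Keep only "svs" systems, e.g. to evaluate synthesized singing separately.
svs_systems = [sid for sid, info in sys_info.items() if info["type"] == "svs"]
print(svs_systems)  # ['sys0001']
```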

> **Note:** `"type"` is one of `"svs"` (singing voice synthesis), `"svc"` (singing voice conversion), or `"gt"` (ground truth). `"default"` in `"other_info"` means no additional information is provided.

---
+
107
+ ## πŸ†• Update Log
108
+
109
+ * **[2025-10-09]** Released **SingMOS-Pro**
110
+ * **[2024-11-06]** Released **SingMOS**
111
+ * **[2024-06-26]** Released **SingMOS_v1**
112
+
113
+ ---
114
+
115
+ ## πŸ“– Citation
116
+
117
+ If you use this dataset, please cite our paper:
118
+
```bibtex
@misc{tang2025singmosprocomprehensivebenchmarksinging,
  title={SingMOS-Pro: A Comprehensive Benchmark for Singing Quality Assessment},
  author={Yuxun Tang and Lan Liu and Wenhao Feng and Yiwen Zhao and Jionghao Han and Yifeng Yu and Jiatong Shi and Qin Jin},
  year={2025},
  eprint={2510.01812},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2510.01812}
}
```