RMSnow committed
Commit b8a98ef · verified · 1 Parent(s): cc9b8a6

Update README.md

Files changed (1):
  1. README.md +7 -6
README.md CHANGED
@@ -18,12 +18,13 @@ datasets:
 
 # SpeechJudge: Towards Human-Level Judgment for Speech Naturalness
 
-[![arXiv](https://img.shields.io/badge/arXiv-2511.07931-b31b1b.svg)](https://arxiv.org/abs/2511.07931)
-[![Demo Page](https://img.shields.io/badge/Project-Demo_Page-blue)](https://speechjudge.github.io/)
-[![GitHub](https://img.shields.io/badge/GitHub-SpeechJudge-black?logo=github)](https://github.com/AmphionTeam/SpeechJudge)
-[![Model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/RMSnow/SpeechJudge-GRM)
-[![Data](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-yellow)](https://huggingface.co/datasets/RMSnow/SpeechJudge-Data)
-[![Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-yellow)](https://huggingface.co/spaces/yuantuo666/SpeechJudge-GRM)
+[![Paper](https://img.shields.io/badge/PAPER-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2511.07931)
+[![Demo](https://img.shields.io/badge/DEMO_PAGE-1f6feb?style=for-the-badge&logo=googlechrome&logoColor=white)](https://speechjudge.github.io/)
+[![GitHub](https://img.shields.io/badge/GITHUB-000000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/AmphionTeam/SpeechJudge)
+[![HF Model](https://img.shields.io/badge/HF_MODEL-FFD14D?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/RMSnow/SpeechJudge-GRM)
+[![HF Data](https://img.shields.io/badge/HF_DATA-FFD14D?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/datasets/RMSnow/SpeechJudge-Data)
+[![HF Space](https://img.shields.io/badge/HF_SPACE-FFD14D?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/spaces/yuantuo666/SpeechJudge-GRM)
+
 
 Aligning large generative models with human feedback is a critical challenge. In speech synthesis, this is particularly pronounced due to the lack of a large-scale human preference dataset, which hinders the development of models that truly align with human perception. To address this, we introduce **SpeechJudge**, a comprehensive suite comprising a dataset, a benchmark, and a reward model centered on ***naturalness***—one of the most fundamental subjective metrics for speech synthesis: