Datasets:
Improve dataset card for Long-TTS-Eval (MGM-Omni Benchmark)
#2
by nielsr HF Staff - opened

README.md CHANGED
---
language:
- en
- zh
license: apache-2.0
pretty_name: Long-TTS-Eval
task_categories:
- automatic-speech-recognition
- text-to-speech
tags:
- multimodal
- speech-generation
- speech-understanding
- voice-cloning
- benchmark
- long-form
dataset_info:
  features:
  - name: id
  - split: hard_tts_eval_en
    path: data/hard_tts_eval_en-*
  - split: hard_tts_eval_zh
    path: data/hard_tts_eval_zh-*
---

# Long-TTS-Eval Dataset (MGM-Omni Benchmark)

This repository hosts the **Long-TTS-Eval** dataset, a benchmark released as part of the [MGM-Omni](https://huggingface.co/papers/2509.25131) project. It is designed to evaluate long-form and complex text-to-speech (TTS) capabilities, as well as speech and audio understanding, in both English and Chinese.

* **Paper:** [MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech](https://huggingface.co/papers/2509.25131)
* **GitHub Repository:** [https://github.com/dvlab-research/MGM-Omni](https://github.com/dvlab-research/MGM-Omni)
* **Project Page / Demo:** [https://huggingface.co/spaces/wcy1122/MGM-Omni](https://huggingface.co/spaces/wcy1122/MGM-Omni)

## Paper Abstract

We present MGM-Omni, a unified Omni LLM for omni-modal understanding and expressive, long-horizon speech generation. Unlike cascaded pipelines that isolate speech synthesis, MGM-Omni adopts a "brain-mouth" design with a dual-track, token-based architecture that cleanly decouples multimodal reasoning from real-time speech generation. This design enables efficient cross-modal interaction and low-latency, streaming speech generation. For understanding, a unified training strategy coupled with a dual audio encoder design enables long-form audio perception across diverse acoustic conditions. For generation, a chunk-based parallel decoding scheme narrows the text-speech token-rate gap, accelerating inference and supporting streaming zero-shot voice cloning with stable timbre over extended durations. Compared to concurrent work, MGM-Omni achieves these capabilities with markedly data-efficient training. Extensive experiments demonstrate that MGM-Omni outperforms existing open-source models in preserving timbre identity across extended sequences, producing natural and context-aware speech, and achieving superior long-form audio and omnimodal understanding. MGM-Omni establishes an efficient, end-to-end paradigm for omnimodal understanding and controllable, personalised long-horizon speech generation.

## Dataset Description

The `Long-TTS-Eval` dataset is structured to support comprehensive evaluation across scenarios and languages. It includes the following splits:

* `long_tts_eval_en`: English long-form TTS evaluation.
* `long_tts_eval_zh`: Chinese long-form TTS evaluation.
* `hard_tts_eval_en`: English TTS evaluation on challenging, complex cases.
* `hard_tts_eval_zh`: Chinese TTS evaluation on challenging, complex cases.

Each data entry includes:

* `id` (string): Unique identifier for the sample.
* `category` (string): The category of the text content.
* `word_count` (int64): The number of words in the text.
* `language` (string): The language of the text (`en` for English, `zh` for Chinese).
* `text` (string): The original text content.
* `text_norm` (string): The normalized version of the text content.
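These fields map directly onto the Hugging Face `datasets` API. Below is a minimal inspection sketch: the helper and the two inline rows are invented for illustration, and the repo ID in the commented-out `load_dataset` call is a placeholder (use this repository's ID):

```python
from collections import defaultdict

def summarize_by_language(rows):
    """Count rows and average `word_count` per language (schema as documented above)."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["language"]].append(row["word_count"])
    return {lang: (len(wc), sum(wc) / len(wc)) for lang, wc in buckets.items()}

# Two invented rows shaped like the documented schema, for illustration only.
rows = [
    {"id": "en_0001", "category": "story", "word_count": 1200,
     "language": "en", "text": "Once upon a time ...", "text_norm": "once upon a time"},
    {"id": "zh_0001", "category": "news", "word_count": 950,
     "language": "zh", "text": "今天的新闻 ...", "text_norm": "今天的新闻"},
]
print(summarize_by_language(rows))  # {'en': (1, 1200.0), 'zh': (1, 950.0)}

# With the real benchmark (network access required), the same helper applies:
# from datasets import load_dataset
# ds = load_dataset("<this-repo-id>", split="long_tts_eval_en")
# print(summarize_by_language(ds))
```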

## Main Properties (of the associated MGM-Omni system)

1. **Omni-modality support**: MGM-Omni accepts audio, video, image, and text inputs, understands long contexts, and can generate both text and speech outputs, making it a versatile multimodal AI assistant.
2. **Long-form Speech Understanding**: Unlike most existing open-source multimodal models, which typically fail on inputs longer than 15 minutes, MGM-Omni can handle hour-long speech inputs while delivering superior overall and fine-grained understanding.
3. **Long-form Speech Generation**: With large-scale training data and chunk-based decoding, MGM-Omni can generate over 10 minutes of smooth, natural speech for continuous storytelling.
4. **Streaming Generation**: Thanks to parallel decoding of speech tokens, MGM-Omni produces efficient, smooth streaming audio, making it suitable for live conversations.
5. **Zero-shot Voice Cloning**: Trained on extensive and diverse audio, MGM-Omni can clone a voice from a short recording (around 10 seconds).
6. **Fully Open-source**: All code, models, and training data will be released.

## Sample Usage

### Zero-Shot Voice Cloning

Generate audio that sounds similar to a provided reference audio using the associated `MGM-Omni-TTS-2B-0927` model:

```bash
python -m mgm.serve.cli_tts \
    --model wcy1122/MGM-Omni-TTS-2B-0927 \
    --ref-audio assets/ref_audio/Man_EN.wav
```

Add `--ref-audio-text` to supply an exact transcript of the reference audio; otherwise, Whisper-large-v3 is used for automatic transcription.
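For batch cloning jobs, the CLI call above can be assembled programmatically. The wrapper below is a hypothetical convenience (only the flag names come from this card; `build_tts_cmd` itself is not part of MGM-Omni):

```python
def build_tts_cmd(model, ref_audio, ref_audio_text=None):
    """Assemble the `mgm.serve.cli_tts` invocation shown above."""
    cmd = ["python", "-m", "mgm.serve.cli_tts",
           "--model", model, "--ref-audio", ref_audio]
    if ref_audio_text is not None:
        # Optional exact transcript; skips the Whisper-large-v3 fallback.
        cmd += ["--ref-audio-text", ref_audio_text]
    return cmd

cmd = build_tts_cmd("wcy1122/MGM-Omni-TTS-2B-0927", "assets/ref_audio/Man_EN.wav")
print(" ".join(cmd))
# To actually synthesize (requires MGM-Omni installed from the GitHub repo):
# import subprocess; subprocess.run(cmd, check=True)
```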

### Chat as an Omni chatbot (Text Input)

```bash
python -m mgm.serve.cli \
    --model wcy1122/MGM-Omni-7B \
    --speechlm wcy1122/MGM-Omni-TTS-2B-0927
```

Add `--ref-audio` (and optionally `--ref-audio-text`) if you want MGM-Omni to respond in a specific voice.

## Evaluation

The MGM-Omni paper reports results for both speech and audio understanding and speech generation, using benchmarks such as `Long-TTS-Eval`.

### Speech and Audio Understanding

| Model | Date | LS-clean↓ | LS-other↓ | CM-EN↓ | CM-ZH↓ | AISHELL↓ |
|:---|:---|:---|:---|:---|:---|:---|
| Mini-Omni2 | 2024-11 | 4.7 | 9.4 | - | - | - |
| Lyra | 2024-12 | 2.0 | 4.0 | - | - | - |
| VITA-1.5 | 2025-01 | 3.4 | 7.5 | - | - | 2.2 |
| Qwen2.5-Omni | 2025-03 | 1.6 | 3.5 | **7.6** | 5.2 | - |
| Ola | 2025-06 | 1.9 | 4.3 | - | - | - |
| **MGM-Omni-7B** | 2025-08 | 1.7 | 3.6 | 8.8 | 4.5 | 1.9 |
| **MGM-Omni-32B** | 2025-08 | **1.5** | **3.2** | 8.0 | **4.0** | **1.8** |

This table presents WER and CER results on speech understanding (lower is better). Here LS refers to LibriSpeech and CM refers to Common Voice.
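WER and CER are edit-distance metrics; for reference, a minimal self-contained sketch is below (the paper's actual scoring pipeline may apply additional text normalization first):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                        # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))    # substitution
            prev = cur
    return dp[n]

def wer(ref, hyp):
    """Word error rate: word-level edit distance / reference word count."""
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / len(r)

def cer(ref, hyp):
    """Character error rate (typical for Chinese): same distance over characters."""
    return edit_distance(list(ref), list(hyp)) / len(ref)

print(round(100 * wer("the cat sat on the mat", "the cat sit on mat"), 1))  # 33.3
```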

| Model | Date | Speech↑ | Sound↑ | Music↑ | Mix↑ | Average↑ |
|:---|:---|:---|:---|:---|:---|:---|
| LLaMA-Omni | 2024-08 | 5.2 | 5.3 | 4.3 | 4.0 | 4.7 |
| Mini-Omni2 | 2024-11 | 3.6 | 3.5 | 2.6 | 3.1 | 3.2 |
| IXC2.5-OmniLive | 2024-12 | 1.6 | 1.8 | 1.7 | 1.6 | 1.7 |
| VITA-1.5 | 2025-01 | 4.8 | 5.5 | 4.9 | 2.9 | 4.5 |
| Qwen2.5-Omni | 2025-03 | 6.8 | 5.7 | 4.8 | 5.4 | 5.7 |
| Ola | 2025-06 | **7.3** | 6.4 | 5.9 | 6.0 | 6.4 |
| **MGM-Omni-7B** | 2025-08 | **7.3** | **6.5** | **6.3** | 6.1 | **6.5** |
| **MGM-Omni-32B** | 2025-08 | 7.1 | **6.5** | 6.2 | **6.2** | **6.5** |

This table presents evaluation results on AIR-Bench Chat (speech, sound, music, and mixed audio); higher is better.

### Speech Generation

| Model | Date | Model Size | CER↓ | SS(ZH)↑ | WER↓ | SS(EN)↑ |
|:---|:---|:---|:---|:---|:---|:---|
| CosyVoice2 | 2024-12 | 0.5B | 1.45 | 0.748 | 2.57 | 0.652 |
| Qwen2.5-Omni-3B | 2025-03 | 0.5B | 1.58 | 0.744 | 2.51 | 0.635 |
| Qwen2.5-Omni-7B | 2025-03 | 2B | 1.42 | 0.754 | 2.33 | 0.641 |
| MOSS-TTSD-v0 | 2025-06 | 2B | 2.18 | 0.594 | 2.46 | 0.476 |
| HiggsAudio-v2 | 2025-07 | 6B | 1.66 | 0.743 | 2.44 | 0.677 |
| **MGM-Omni** | 2025-08 | 0.6B | 1.49 | 0.749 | 2.54 | 0.670 |
| **MGM-Omni** | 2025-08 | 2B | 1.38 | 0.753 | 2.28 | 0.682 |
| **MGM-Omni** | 2025-08 | 4B | **1.34** | **0.756** | **2.22** | **0.684** |

This table presents speech generation results on seed-tts-eval: CER and SS (speaker similarity) for Chinese, WER and SS for English. For Qwen2.5-Omni, model size refers to the size of the talker.

## Citation

If you find this repo useful for your research, we would appreciate it if you could cite our work 😊:

```bibtex
@article{wang2025mgmomni,
  title={MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech},
  author={Wang, Chengyao and Zhong, Zhisheng and Peng, Bohao and Yang, Senqiao and Liu, Yuqi and Gui, Haokun and Xia, Bin and Li, Jingyao and Yu, Bei and Jia, Jiaya},
  journal={arXiv:2509.25131},
  year={2025}
}

@inproceedings{zhong2025lyra,
  title={Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition},
  author={Zhong, Zhisheng and Wang, Chengyao and Liu, Yuqi and Yang, Senqiao and Tang, Longxiang and Zhang, Yuechen and Li, Jingyao and Qu, Tianyuan and Li, Yanwei and Chen, Yukang and Yu, Shaozuo and Wu, Sitong and Lo, Eric and Liu, Shu and Jia, Jiaya},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}

@article{li2024mgm,
  title={Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models},
  author={Li, Yanwei and Zhang, Yuechen and Wang, Chengyao and Zhong, Zhisheng and Chen, Yixin and Chu, Ruihang and Liu, Shaoteng and Jia, Jiaya},
  journal={arXiv:2403.18814},
  year={2024}
}
```