---
language:
  - en
  - zh
license: apache-2.0
pretty_name: Long-TTS-Eval
task_categories:
  - text-to-speech
tags:
  - multimodal
  - speech-generation
  - voice-cloning
  - benchmark
  - long-form
dataset_info:
  features:
    - name: id
      dtype: string
    - name: category
      dtype: string
    - name: word_count
      dtype: int64
    - name: language
      dtype: string
    - name: text
      dtype: string
    - name: text_norm
      dtype: string
  splits:
    - name: long_tts_eval_en
      num_bytes: 3521741
      num_examples: 360
    - name: long_tts_eval_zh
      num_bytes: 2219738
      num_examples: 356
    - name: hard_tts_eval_en
      num_bytes: 577158
      num_examples: 262
    - name: hard_tts_eval_zh
      num_bytes: 476990
      num_examples: 265
  download_size: 4064205
  dataset_size: 6795627
configs:
  - config_name: default
    data_files:
      - split: long_tts_eval_en
        path: data/long_tts_eval_en-*
      - split: long_tts_eval_zh
        path: data/long_tts_eval_zh-*
      - split: hard_tts_eval_en
        path: data/hard_tts_eval_en-*
      - split: hard_tts_eval_zh
        path: data/hard_tts_eval_zh-*
---

# Long-TTS-Eval Dataset (MGM-Omni Benchmark)

This repository hosts the Long-TTS-Eval dataset, a benchmark released as part of the MGM-Omni project. It is designed for evaluating long-form and complex Text-to-Speech (TTS) capabilities, as well as speech and audio understanding in both English and Chinese.

## Dataset Description

The Long-TTS-Eval dataset is structured to facilitate comprehensive evaluation across different scenarios and languages. It includes the following splits:

- `long_tts_eval_en`: data for evaluating English long-form TTS.
- `long_tts_eval_zh`: data for evaluating Chinese long-form TTS.
- `hard_tts_eval_en`: data for evaluating English TTS on challenging, complex cases.
- `hard_tts_eval_zh`: data for evaluating Chinese TTS on challenging, complex cases.

Each data entry in the dataset includes:

- `id` (string): unique identifier for the sample.
- `category` (string): category of the text content.
- `word_count` (int64): number of words in the text.
- `language` (string): language of the text (`en` for English, `zh` for Chinese).
- `text` (string): original text content.
- `text_norm` (string): normalized version of the text content.
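For illustration, a single record can be represented as a plain Python dict. The values below are invented placeholders, not real dataset entries; only the field names and types follow the schema above.

```python
# Hypothetical sample mirroring the Long-TTS-Eval schema above.
# The field values are made-up placeholders; only names/types match the card.
sample = {
    "id": "long_tts_eval_en_0001",
    "category": "news",
    "word_count": 4,
    "language": "en",
    "text": "Hello, long-form TTS!",
    "text_norm": "hello long form tts",
}

EXPECTED_TYPES = {
    "id": str,
    "category": str,
    "word_count": int,
    "language": str,
    "text": str,
    "text_norm": str,
}

def validate(record: dict) -> bool:
    """Check that a record has exactly the expected fields and types."""
    return (
        record.keys() == EXPECTED_TYPES.keys()
        and all(isinstance(record[k], t) for k, t in EXPECTED_TYPES.items())
    )
```

A check like `validate` can catch schema drift (missing fields, `word_count` parsed as a string) before running an evaluation over the full splits.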

## Usage

Please refer to the MGM-Omni repository for instructions on running Long-TTS-Eval.

## Evaluation Results

| Model | Date | Model Size | EN WER↓ | ZH CER↓ | EN-hard WER↓ | ZH-hard WER↓ |
|---|---|---|---|---|---|---|
| CosyVoice2 (chunk) | 2024-12 | 0.5B | 14.80 | 5.27 | 42.48 | 32.76 |
| MOSS-TTSD-v0.5 | 2025-06 | 6B | 8.69 | 6.82 | 62.61 | 62.97 |
| HiggsAudio-v2 | 2025-07 | 6B | 27.09 | 31.39 | 98.61 | 98.85 |
| MGM-Omni | 2025-08 | 2B | 4.98 | 5.58 | 26.26 | 23.58 |
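The WER/CER scores above follow the standard definition: edit distance between hypothesis and reference (at the word or character level) divided by reference length. A minimal pure-Python sketch of that metric, not the benchmark's official scoring script:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]

def wer(ref_text, hyp_text):
    """Word error rate: word-level edits / reference word count."""
    ref, hyp = ref_text.split(), hyp_text.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(ref_text, hyp_text):
    """Character error rate: character-level edits / reference length."""
    return edit_distance(list(ref_text), list(hyp_text)) / len(ref_text)
```

Character-level scoring is the common choice for Chinese, where word segmentation is ambiguous. In practice, transcripts are normalized (e.g. against `text_norm`) before scoring.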

## Citation

If you find this repo useful for your research, we would appreciate it if you could cite our work 😊:

```bibtex
@article{wang2025mgm,
  title={MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech},
  author={Wang, Chengyao and Zhong, Zhisheng and Peng, Bohao and Yang, Senqiao and Liu, Yuqi and Gui, Haokun and Xia, Bin and Li, Jingyao and Yu, Bei and Jia, Jiaya},
  journal={arXiv preprint arXiv:2509.25131},
  year={2025}
}

@inproceedings{zhong2025lyra,
  title={Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition},
  author={Zhong, Zhisheng and Wang, Chengyao and Liu, Yuqi and Yang, Senqiao and Tang, Longxiang and Zhang, Yuechen and Li, Jingyao and Qu, Tianyuan and Li, Yanwei and Chen, Yukang and Yu, Shaozuo and Wu, Sitong and Lo, Eric and Liu, Shu and Jia, Jiaya},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}

@article{li2024mgm,
  title={Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models},
  author={Li, Yanwei and Zhang, Yuechen and Wang, Chengyao and Zhong, Zhisheng and Chen, Yixin and Chu, Ruihang and Liu, Shaoteng and Jia, Jiaya},
  journal={arXiv preprint arXiv:2403.18814},
  year={2024}
}
```