---
license: cc-by-nc-4.0
pipeline_tag: text-to-speech
tags:
  - tts
  - speech-synthesis
  - emotion-control
---

# EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting

EmoVoice is a novel emotion-controllable text-to-speech (TTS) model that leverages large language models (LLMs) to enable fine-grained, freestyle natural-language emotion control. A phoneme-boost variant (EmoVoice-PP) is designed to further enhance content consistency.

This model was presented in the paper: EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting.

For more details, check out the project page and the GitHub repository.
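To illustrate what "freestyle text prompting" means in practice, the sketch below pairs target text with a free-form natural-language emotion description. The function name and prompt layout are hypothetical, for illustration only, and do not reflect EmoVoice's actual input format; see the inference scripts below for real usage.

```python
def build_prompt(text: str, emotion_description: str) -> str:
    """Hypothetical prompt assembly: the emotion is described in free
    natural language rather than chosen from a fixed label set."""
    return f"Emotion: {emotion_description}\nText: {text}"

# Any natural-language description can serve as the emotion condition.
prompt = build_prompt(
    "I can't believe we won!",
    "ecstatic, breathless with joy",
)
print(prompt)
```

The key point is that the emotion condition is unconstrained text, in contrast to categorical emotion labels (e.g. "happy"/"sad") used by earlier controllable TTS systems.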

## Installation

Create a separate environment if needed:

```sh
conda create -n EmoVoice python=3.10
conda activate EmoVoice
pip install -r requirements.txt
```

## Usage

### Decode with checkpoints

```sh
bash examples/tts/scripts/inference_EmoVoice.sh
bash examples/tts/scripts/inference_EmoVoice-PP.sh
bash examples/tts/scripts/inference_EmoVoice_1.5B.sh
```

### Train from scratch

```sh
# First stage: pretrain TTS
bash examples/tts/scripts/pretrain_EmoVoice.sh
bash examples/tts/scripts/pretrain_EmoVoice-PP.sh
bash examples/tts/scripts/pretrain_EmoVoice_1.5B.sh

# Second stage: finetune emotional TTS
bash examples/tts/scripts/ft_EmoVoice.sh
bash examples/tts/scripts/ft_EmoVoice-PP.sh
bash examples/tts/scripts/ft_EmoVoice_1.5B.sh
```

## Checkpoints

## Dataset

## Acknowledgements

## Citation

If our work and codebase are useful to you, please cite:

```bibtex
@article{yang2025emovoice,
  title={EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting},
  author={Yang, Guanrou and Yang, Chen and Chen, Qian and Ma, Ziyang and Chen, Wenxi and Wang, Wen and Wang, Tianrui and Yang, Yifan and Niu, Zhikang and Liu, Wenrui and others},
  journal={arXiv preprint arXiv:2504.12867},
  year={2025}
}
```