---
license: apache-2.0
---

# MOSS-TTS Family

## Overview

MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high‑fidelity**, **high‑expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.

## Introduction

<p align="center">
<img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_tts_family_arch.jpeg" width="85%" />
</p>

When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.

- **MOSS‑TTS**: The flagship, production-ready text-to-speech foundation model in the family, built to ship, scale, and deliver real-world voice applications beyond demos. Its core capability is high-fidelity zero-shot voice cloning, complemented by ultra-long speech generation, token-level duration control, multilingual and code-switched synthesis, and fine-grained Pinyin/phoneme pronunciation control. Together, these features make it a robust base model for scalable narration, dubbing, and voice-driven products.
- **MOSS‑TTSD**: A production-oriented long-form spoken dialogue generation model for creating highly expressive, multi-party conversational audio at scale. It supports continuous long-duration generation, flexible multi-speaker turn-taking control, and zero-shot voice cloning from short reference audio, enabling natural conversations with rich interaction dynamics. It is designed for real-world long-form content such as podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
- **MOSS‑VoiceGenerator**: An open-source voice design system that generates speaker timbres directly from free-form text descriptions, enabling fast creation of voices for characters, personalities, and emotions without requiring reference audio. It unifies timbre design, style control, and content synthesis in a single instruction-driven model, producing high-fidelity, emotionally expressive speech that feels naturally human. It can be used standalone for creative production, or as a voice design layer that improves integration and usability for downstream TTS systems.
- **MOSS‑SoundEffect**: A high-fidelity sound effect generation model built for real-world content creation, offering strong environmental richness, broad category coverage, and reliable duration controllability. Trained on large-scale, high-quality data, it generates consistent audio from text prompts across natural ambience, urban scenes, creatures, human actions, and music-like clips. It is well suited for film and game production, interactive experiences, and data synthesis pipelines.
- **MOSS‑TTS‑Realtime**: A context-aware, multi-turn streaming TTS foundation model designed for real-time voice agents. Unlike conventional TTS systems that synthesize replies in isolation, it conditions generation on multi-turn dialogue history, including both textual and acoustic signals from prior user speech, so responses stay coherent, consistent, and natural across turns. With low-latency incremental synthesis and strong voice stability, it enables truly conversational, human-like real-time speech experiences.

## Released Models

| Model | Architecture | Size | Model Card | Hugging Face |
|---|---|---:|---|---|
| **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
| | MossTTSLocal | 1.7B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
| **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_ttsd_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
| **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_voice_generator_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
| **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_sound_effect_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
| **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_realtime_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |
# MOSS Voice Generator Model Card

**MOSS Voice Generator** is an open-source voice generation system for creating custom speaker timbres from free-form textual descriptions. It lets users generate voices that reflect specific characters, personalities, and emotions, and is particularly notable for producing speech with natural-sounding emotional expressiveness for a realistic, nuanced listening experience. As an open-source tool, it is suitable for a variety of applications, such as audiobooks, game dubbing, role-playing agents, and conversational assistants.

---

## 1. Overview

### 1.1 TTS Family Positioning

**MOSS Voice Generator** is a high-fidelity voice design tool within the broader TTS Family. It specializes in crafting expressive and natural-sounding voices from textual descriptions. Unlike traditional TTS systems that rely on predefined voices or reference audio, MOSS Voice Generator enables zero-shot voice design, allowing customized voices to be created for a variety of applications, such as characters, audiobooks, games, or virtual assistants. It can also serve as a voice design layer for other TTS systems, removing the need to find suitable reference audio and improving integration and performance.

**Key Capabilities**

* **Highly expressive emotional delivery**: Generates voices with dynamic and nuanced emotional performances, allowing natural shifts in tone, pace, and emotion.
* **Human-like naturalness**: Speech that is hard to distinguish from a real human voice, with authentic breathing, pauses, and vocal nuances.
* **Multilingual support**: High-quality synthesis in Chinese and English.

---
### 1.2 Model Architecture

**MOSS Voice Generator** uses the MossTTSDelay architecture (see [moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md) for details): voice description instructions and the text to be synthesized are concatenated and jointly tokenized as a single input that drives speech generation, enabling unified modeling of timbre design, style control, and content synthesis. Through instruction-timbre alignment, the model learns the correspondence between textual descriptions and acoustic features, allowing it to generate high-fidelity speech with the target timbre, emotion, and style directly from free-form text prompts, without requiring any reference audio.
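
As a mental model, the instruction and the content text end up in one token sequence. The sketch below is illustrative only: the newline delimiter and whitespace "tokenization" are hypothetical stand-ins, and real prompts are built by the `processor` shown in the Quick Start below.

```python
# Illustrative sketch, not the model's real prompt format: the newline
# delimiter and whitespace "tokenizer" are hypothetical stand-ins.
instruction = "Warm, elderly storyteller's voice, slow and gentle."
text = "Once upon a time, there was a quiet village by the sea."

# Instruction and content are concatenated into a single sequence, so one
# autoregressive model conditions jointly on style/timbre and on content.
prompt = f"{instruction}\n{text}"

# A real tokenizer maps this string to token ids; splitting on whitespace
# merely shows that both parts share the same input sequence.
tokens = prompt.split()
```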

### 1.3 Released Model

**Recommended decoding hyperparameters**

| Model | audio_temperature | audio_top_p | audio_top_k | audio_repetition_penalty |
|---|---:|---:|---:|---:|
| **MOSS-VoiceGenerator** | 1.5 | 0.6 | 50 | 1.1 |

---

## 2. Quick Start

```python
from pathlib import Path

import torch
import torchaudio
from transformers import AutoModel, AutoProcessor

# Disable the broken cuDNN SDPA backend
torch.backends.cuda.enable_cudnn_sdp(False)
# Keep these enabled as fallbacks
torch.backends.cuda.enable_flash_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(True)
torch.backends.cuda.enable_math_sdp(True)

pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-Voice-Generator"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    normalize_inputs=True,  # normalize text and instruction input
)
processor.audio_tokenizer = processor.audio_tokenizer.to(device)

# ====== Batch demo ======
# "Oh, my aching back; age is really catching up with me." (tired, hoarse elderly voice)
text1 = "哎呀,我的老腰啊,这年纪大了就是不行了。"
instruction1 = "疲惫沙哑的老年声音缓慢抱怨,带有轻微呻吟。"

# Enthusiastic food-show host presenting legendary dragon's-beard noodles
text2 = "亲爱的观众们,今天我要为大家做一道传说中的龙须面,这道面条细如发丝,需要极其精湛的手艺才能制作成功,请大家仔细观看我的每一个动作。"
instruction2 = "热情的美食节目主持人,语调生动活泼,充满对美食的热爱和专业精神。"

text3 = "Hey there, stranger! What brings you to our humble town? Looking for a good drink or a tall tale?"
instruction3 = "Hearty, jovial tavern owner's voice, loud and welcoming with a slightly gruff, friendly tone in American English, radiating warmth and hospitality."

text4 = "The quick brown fox jumps over the lazy dog."
instruction4 = "Clear, neutral voice for phonetic practice, even tempo and precise articulation in standard American English, emphasizing clarity of each word."

conversations = [
    [processor.build_user_message(text=text1, instruction=instruction1)],
    [processor.build_user_message(text=text2, instruction=instruction2)],
    [processor.build_user_message(text=text3, instruction=instruction3)],
    [processor.build_user_message(text=text4, instruction=instruction4)],
]

model = AutoModel.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    attn_implementation="sdpa",
    torch_dtype=dtype,
).to(device)
model.eval()

batch_size = 1

save_dir = Path("inference_root")
save_dir.mkdir(exist_ok=True, parents=True)
sample_idx = 0
with torch.no_grad():
    for start in range(0, len(conversations), batch_size):
        batch_conversations = conversations[start : start + batch_size]
        batch = processor(batch_conversations, mode="generation")
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
        )

        for message in processor.decode(outputs):
            audio = message.audio_codes_list[0]
            out_path = save_dir / f"sample{sample_idx}.wav"
            sample_idx += 1
            torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
```

### Input Types

**UserMessage**

| Field | Type | Required | Description |
|---|---|---:|---|
| `text` | `str` | Yes | Text to synthesize. Supports Chinese and English. |
| `instruction` | `str` | Yes | Specifies the style of the synthesized speech. Users can provide detailed speech style instructions, such as emotion, speed, pitch, and voice characteristics. |

### Generation Hyperparameters

| Parameter | Type | Default | Description |
|---|---|---:|---|
| `audio_temperature` | `float` | 1.5 | Higher values increase variation; lower values stabilize prosody. |
| `audio_top_p` | `float` | 0.6 | Nucleus sampling cutoff. Lower values are more conservative. |
| `audio_top_k` | `int` | 50 | Top-K sampling. Lower values tighten the sampling space. |
| `audio_repetition_penalty` | `float` | 1.1 | Values above 1.0 discourage repeating patterns. |

> Note: MOSS-Voice-Generator is **sensitive to decoding hyperparameters**. See **1.3 Released Model** for the recommended defaults.
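
For example, the recommended defaults from Section 1.3 can be collected once and passed through at generation time. This sketch assumes, as the parameter table suggests, that `model.generate` accepts these `audio_*` keyword arguments; adapt it to your setup.

```python
# Recommended decoding defaults for MOSS-VoiceGenerator (Section 1.3).
gen_kwargs = {
    "audio_temperature": 1.5,
    "audio_top_p": 0.6,
    "audio_top_k": 50,
    "audio_repetition_penalty": 1.1,
}

# Hypothetical usage, mirroring the Quick Start loop above:
# outputs = model.generate(input_ids=input_ids,
#                          attention_mask=attention_mask,
#                          **gen_kwargs)
```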

---

## 3. Performance

MOSS Voice Generator demonstrates significant advantages in subjective evaluation. Using 160 internal test samples covering diverse voice styles, we established three independent evaluation dimensions: (1) **Overall Preference**: which voice would you choose? (2) **Instruction Following**: which audio best follows the instruction (gender, age, tone, emotion, accent, speed)? (3) **Naturalness**: which audio sounds most like real human speech? Across all three dimensions, **MOSS Voice Generator outperforms every compared TTS system** that supports voice creation without predefined voices and with customizable preview text.

<p align="center">
<img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_voiceGenerator_winrate" width="85%" />
</p>

Results on the InstructTTSEval benchmark also highlight the strong competitive edge of MOSS Voice Generator.

<p align="center">
<img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_voiceGenerator_instructttseval" width="85%" />
</p>

> Note: GPT4o-mini-tts and Gemini-2.5-pro-tts use predefined voice timbres.