---
license: apache-2.0
tags:
- text-to-speech
language:
- zh
- en
- de
- es
- fr
- ja
- it
- he
- ko
- ru
- fa
- ar
- pl
- pt
- cs
- da
- sv
- hu
- el
- tr
---
# MOSS-TTS Family


<br>

<p align="center">
&nbsp;&nbsp;&nbsp;&nbsp;
<img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/openmoss_x_mosi" height="50" align="middle" />
</p>

<div align="center">
<a href="https://github.com/OpenMOSS/MOSS-TTS/tree/main"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
<a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
<a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
<a href="https://github.com/OpenMOSS/MOSS-TTS"><img src="https://img.shields.io/badge/Arxiv-Coming%20soon-red?logo=arxiv&amp"></a>

<a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
<a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>
<a href="https://x.com/Open_MOSS"><img src="https://img.shields.io/badge/Twitter-Follow-black?logo=x&amp"></a>
<a href="https://discord.gg/fvm5TaWjU3"><img src="https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&amp"></a>
</div>


## Overview
MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high‑fidelity**, **high‑expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.


## Introduction

<p align="center">
<img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_tts_family_arch.jpeg" width="85%" />
</p>

When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.

- **MOSS‑TTS**: the flagship production TTS foundation model, centered on high-fidelity zero-shot voice cloning with controllable long-form synthesis, pronunciation, and multilingual/code-switched speech. It serves as the core engine for scalable narration, dubbing, and voice-driven products.
- **MOSS‑TTSD**: a production long-form dialogue model for expressive multi-speaker conversational audio at scale. It supports long-duration continuity, turn-taking control, and zero-shot voice cloning from short references for podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
- **MOSS‑VoiceGenerator**: an open-source voice design model that creates speaker timbres directly from free-form text, without reference audio. It unifies timbre design, style control, and content synthesis, and can be used standalone or as a voice-design layer for downstream TTS.
- **MOSS‑SoundEffect**: a high-fidelity text-to-sound model with broad category coverage and controllable duration for real content production. It generates stable audio from prompts across ambience, urban scenes, creatures, human actions, and music-like clips for film, games, interactive media, and data synthesis.
- **MOSS‑TTS‑Realtime**: a context-aware, multi-turn streaming TTS model for real-time voice agents. By conditioning on dialogue history across both text and prior user acoustics, it delivers low-latency synthesis with coherent, consistent voice responses across turns.


## Released Models

| Model | Architecture | Size | Model Card | Hugging Face |
|---|---|---:|---|---|
| **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
| | MossTTSLocal | 1.7B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
| **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_ttsd_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
| **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_voice_generator_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
| **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_sound_effect_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
| **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_realtime_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |

# MOSS-TTS
**MOSS-TTS** is a next-generation, production-grade TTS foundation model focused on **voice cloning**, **ultra-long stable speech generation**, **token-level duration control**, **multilingual & code-switched synthesis**, and **fine-grained Pinyin/phoneme-level pronunciation control**. It is built on a clean autoregressive discrete-token recipe that emphasizes high-quality audio tokenization, large-scale diverse pre-training data, and efficient discrete token modeling.

## 1. Overview
### 1.1 TTS Family Positioning
MOSS-TTS is the **flagship base model** in our open-source **TTS Family**. It is designed as a production-ready synthesis backbone that can serve as the primary high-quality engine for scalable voice applications, and as a strong research baseline for controllable TTS and discrete audio token modeling.

**Design goals**
- **Production readiness**: robust voice cloning with stable, on-brand speaker identity at scale
- **Controllability**: duration and pronunciation controls that integrate into real workflows
- **Long-form stability**: consistent identity and delivery for extended narration
- **Multilingual coverage**: multilingual and code-switched synthesis as first-class capabilities


### 1.2 Key Capabilities

MOSS-TTS delivers state-of-the-art quality while providing the fine-grained controllability and long-form stability required for production-grade voice applications, from zero-shot cloning and hour-long narration to token- and phoneme-level control across multilingual and code-switched speech.

* **State-of-the-art evaluation performance**: top-tier objective and subjective results across standard TTS benchmarks and in-house human preference testing, validating both fidelity and naturalness.
* **Zero-shot voice cloning**: clone a target speaker's timbre (and part of their speaking style) from short reference audio, without speaker-specific fine-tuning.
* **Ultra-long speech generation (up to 1 hour)**: continuous long-form speech generation for up to one hour in a single run, designed for extended narration and long-session content creation.
* **Token-level duration control**: control pacing, rhythm, pauses, and speaking rate at token resolution for precise alignment and expressive delivery.
* **Phoneme-level pronunciation control**: supports
  * pure **Pinyin** input
  * pure **IPA** phoneme input
  * mixed **Chinese / English / Pinyin / IPA** input in any combination
* **Multilingual support**: high-quality multilingual synthesis with robust generalization across languages and accents.
* **Code-switching**: natural mixed-language generation within a single utterance (e.g., Chinese–English), with smooth transitions, consistent speaker identity, and pronunciation-aware rendering on both sides of the switch.

### 1.3 Model Architecture

MOSS-TTS includes **two complementary architectures**, both trained and released to explore different performance/latency tradeoffs and to support downstream research.

**Architecture A: Delay Pattern (MossTTSDelay)**
- Single Transformer backbone with **(n_vq + 1) heads**.
- Uses **delay scheduling** for multi-codebook audio tokens.
- Strong long-context stability, efficient inference, and production-friendly behavior.

**Architecture B: Global Latent + Local Transformer (MossTTSLocal)**
- Backbone produces a **global latent** per time step.
- A lightweight **Local Transformer** emits a token block per step.
- **Streaming-friendly** with simpler alignment (no delay scheduling).

**Why train both?**
- **Exploration of architectural potential** and validation across multiple generation paradigms.
- **Different tradeoffs**: the delay pattern tends to be faster and more stable for long-form synthesis; the local variant is smaller and excels on objective benchmarks.
- **Open-source value**: two strong baselines for research, ablation, and downstream innovation.

For full details, see:
- **[moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md)**
- **[moss_tts_local/README.md](https://github.com/OpenMOSS/MOSS-TTS/tree/main/moss_tts_local)**

### 1.4 Released Models

| Model | Description |
|---|---|
| **MossTTSDelay-8B** | **Recommended for production**. Faster inference, stronger long-context stability, and robust voice cloning quality. Best for large-scale deployment and long-form narration. |
| **MossTTSLocal-1.7B** | **Recommended for evaluation and research**. Smaller model size with SOTA objective metrics. Great for quick experiments, ablations, and academic studies. |

**Recommended decoding hyperparameters (per model)**

| Model | audio_temperature | audio_top_p | audio_top_k | audio_repetition_penalty |
|---|---:|---:|---:|---:|
| **MossTTSDelay-8B** | 1.7 | 0.8 | 25 | 1.0 |
| **MossTTSLocal-1.7B** | 1.0 | 0.95 | 50 | 1.1 |

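When switching between the two released models in scripts, it can help to keep the recommended settings above in one lookup table. The sketch below is illustrative only: the dictionary name is hypothetical, the values mirror the table, and passing them onward (for example as keyword arguments to generation) is an assumption based on the Generation Hyperparameters section, not an official API guarantee.

```python
# Illustrative lookup of the recommended decoding hyperparameters per model.
# Values are copied from the table above; how they reach generation is up to
# your integration (hypothetical structure, not part of the MOSS-TTS API).
DECODING_PRESETS = {
    "MossTTSDelay-8B": {
        "audio_temperature": 1.7,
        "audio_top_p": 0.8,
        "audio_top_k": 25,
        "audio_repetition_penalty": 1.0,
    },
    "MossTTSLocal-1.7B": {
        "audio_temperature": 1.0,
        "audio_top_p": 0.95,
        "audio_top_k": 50,
        "audio_repetition_penalty": 1.1,
    },
}

# Pick the preset for the model you are running.
preset = DECODING_PRESETS["MossTTSDelay-8B"]
print(preset["audio_temperature"])  # 1.7
```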
## 2. Quick Start

### Environment Setup

We recommend a clean, isolated Python environment with **Transformers 5.0.0** to avoid dependency conflicts.

```bash
conda create -n moss-tts python=3.12 -y
conda activate moss-tts
```

Install all required dependencies:

```bash
git clone https://github.com/OpenMOSS/MOSS-TTS.git
cd MOSS-TTS
pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e .
```

#### (Optional) Install FlashAttention 2

For better speed and lower GPU memory usage, you can install FlashAttention 2 if your hardware supports it.

```bash
pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
```

If your machine has limited RAM and many CPU cores, you can cap build parallelism:

```bash
MAX_JOBS=4 pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
```

Notes:
- Dependencies are managed in `pyproject.toml`, which currently pins `torch==2.9.1+cu128` and `torchaudio==2.9.1+cu128`.
- If FlashAttention 2 fails to build on your machine, you can skip it and use the default attention backend.
- FlashAttention 2 is only available on supported GPUs and is typically used with `torch.float16` or `torch.bfloat16`.

### Basic Usage

> Tip: For production usage, prioritize **MossTTSDelay-8B**. The examples below use this model; **MossTTSLocal-1.7B** supports the same API, and a practical walkthrough is available in [moss_tts_local/README.md](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer).

MOSS-TTS provides a convenient `generate` interface for rapid usage. The examples below cover:
1. Direct generation (Chinese / English / Pinyin / IPA)
2. Voice cloning
3. Duration control

```python
from pathlib import Path
import importlib.util

import torch
import torchaudio
from transformers import AutoModel, AutoProcessor

# Disable the broken cuDNN SDPA backend
torch.backends.cuda.enable_cudnn_sdp(False)
# Keep these enabled as fallbacks
torch.backends.cuda.enable_flash_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(True)
torch.backends.cuda.enable_math_sdp(True)


pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32


def resolve_attn_implementation() -> str:
    # Prefer FlashAttention 2 when package + device conditions are met.
    if (
        device == "cuda"
        and importlib.util.find_spec("flash_attn") is not None
        and dtype in {torch.float16, torch.bfloat16}
    ):
        major, _ = torch.cuda.get_device_capability()
        if major >= 8:
            return "flash_attention_2"

    # CUDA fallback: use PyTorch SDPA kernels.
    if device == "cuda":
        return "sdpa"

    # CPU fallback.
    return "eager"


attn_implementation = resolve_attn_implementation()
print(f"[INFO] Using attn_implementation={attn_implementation}")

processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
)
processor.audio_tokenizer = processor.audio_tokenizer.to(device)

text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。\n\n首先,我想祝你——\n每天都能平平安安、快快乐乐。\n\n希望你早上醒来的时候,\n窗外有光,屋子里很安静,\n你的心是轻轻的,没有着急,也没有害怕。\n\n希望你吃饭的时候胃口很好,\n走路的时候脚步稳稳,\n晚上睡觉的时候,能做一个又一个甜甜的梦。\n\n我希望你能一直保持好奇心。\n对世界充满问题,\n对天空、星星、花草、书本和故事感兴趣。\n当你问“为什么”的时候,\n希望总有人愿意认真地听你说话。\n\n我也希望你学会温柔。\n温柔地对待朋友,\n温柔地对待小动物,\n也温柔地对待自己。\n\n如果有一天你犯了错,\n请不要太快责怪自己,\n因为每一个认真成长的人,\n都会在路上慢慢学会更好的方法。\n\n愿你拥有勇气。\n当你站在陌生的地方时,\n当你第一次举手发言时,\n当你遇到困难、感到害怕的时候,\n希望你能轻轻地告诉自己:\n“我可以试一试。”\n\n就算没有一次成功,也没有关系。\n失败不是坏事,\n它只是告诉你,你正在努力。\n\n我希望你学会分享快乐。\n把开心的事情告诉别人,\n把笑声送给身边的人,\n因为快乐被分享的时候,\n会变得更大、更亮。\n\n如果有一天你感到难过,\n我希望你知道——\n难过并不丢脸,\n哭泣也不是软弱。\n\n愿你能找到一个安全的地方,\n慢慢把心里的话说出来,\n然后再一次抬起头,看见希望。\n\n我还希望你能拥有梦想。\n这个梦想也许很大,\n也许很小,\n也许现在还说不清楚。\n\n没关系。\n梦想会和你一起长大,\n在时间里慢慢变得清楚。\n\n最后,我想送你一个最最重要的祝福:\n\n愿你被世界温柔对待,\n也愿你成为一个温柔的人。\n\n愿你的每一天,\n都值得被记住,\n都值得被珍惜。\n\n亲爱的你,\n请记住,\n你是独一无二的,\n你已经很棒了,\n而你的未来,\n一定会慢慢变得闪闪发光。\n\n祝你健康、勇敢、幸福,\n祝你永远带着笑容向前走。"
text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."
text_3 = "nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?"
text_4 = "nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?"
text_5 = "您好,请问您来自哪 zuo4 cheng2 shi4?"
text_6 = "/həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/"

ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"

conversations = [
    # Direct TTS (no reference)
    [processor.build_user_message(text=text_1)],
    [processor.build_user_message(text=text_2)],
    # Pinyin or IPA input
    [processor.build_user_message(text=text_3)],
    [processor.build_user_message(text=text_4)],
    [processor.build_user_message(text=text_5)],
    [processor.build_user_message(text=text_6)],
    # Voice cloning (with reference)
    [processor.build_user_message(text=text_1, reference=[ref_audio_1])],
    [processor.build_user_message(text=text_2, reference=[ref_audio_2])],
    # Duration control
    [processor.build_user_message(text=text_2, tokens=325)],
    [processor.build_user_message(text=text_2, tokens=600)],
]

model = AutoModel.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    # If FlashAttention 2 is installed, you can set attn_implementation="flash_attention_2"
    attn_implementation=attn_implementation,
    torch_dtype=dtype,
).to(device)
model.eval()

batch_size = 1

save_dir = Path("inference_root")
save_dir.mkdir(exist_ok=True, parents=True)
sample_idx = 0
with torch.no_grad():
    for start in range(0, len(conversations), batch_size):
        batch_conversations = conversations[start : start + batch_size]
        batch = processor(batch_conversations, mode="generation")
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=4096,
        )

        for message in processor.decode(outputs):
            audio = message.audio_codes_list[0]
            out_path = save_dir / f"sample{sample_idx}.wav"
            sample_idx += 1
            torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
```

### Continuation + Voice Cloning (Prefix Audio + Text)

MOSS-TTS supports continuation-based cloning: provide a prefix audio clip in the assistant message, and make sure the **prefix transcript** is included in the text. The model continues in the same speaker identity and style.

```python
from pathlib import Path
import importlib.util

import torch
import torchaudio
from transformers import AutoModel, AutoProcessor

# Disable the broken cuDNN SDPA backend
torch.backends.cuda.enable_cudnn_sdp(False)
# Keep these enabled as fallbacks
torch.backends.cuda.enable_flash_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(True)
torch.backends.cuda.enable_math_sdp(True)


pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32


def resolve_attn_implementation() -> str:
    # Prefer FlashAttention 2 when package + device conditions are met.
    if (
        device == "cuda"
        and importlib.util.find_spec("flash_attn") is not None
        and dtype in {torch.float16, torch.bfloat16}
    ):
        major, _ = torch.cuda.get_device_capability()
        if major >= 8:
            return "flash_attention_2"

    # CUDA fallback: use PyTorch SDPA kernels.
    if device == "cuda":
        return "sdpa"

    # CPU fallback.
    return "eager"


attn_implementation = resolve_attn_implementation()
print(f"[INFO] Using attn_implementation={attn_implementation}")

processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
)
processor.audio_tokenizer = processor.audio_tokenizer.to(device)

text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。\n\n首先,我想祝你——\n每天都能平平安安、快快乐乐。\n\n希望你早上醒来的时候,\n窗外有光,屋子里很安静,\n你的心是轻轻的,没有着急,也没有害怕。\n\n希望你吃饭的时候胃口很好,\n走路的时候脚步稳稳,\n晚上睡觉的时候,能做一个又一个甜甜的梦。\n\n我希望你能一直保持好奇心。\n对世界充满问题,\n对天空、星星、花草、书本和故事感兴趣。\n当你问“为什么”的时候,\n希望总有人愿意认真地听你说话。\n\n我也希望你学会温柔。\n温柔地对待朋友,\n温柔地对待小动物,\n也温柔地对待自己。\n\n如果有一天你犯了错,\n请不要太快责怪自己,\n因为每一个认真成长的人,\n都会在路上慢慢学会更好的方法。\n\n愿你拥有勇气。\n当你站在陌生的地方时,\n当你第一次举手发言时,\n当你遇到困难、感到害怕的时候,\n希望你能轻轻地告诉自己:\n“我可以试一试。”\n\n就算没有一次成功,也没有关系。\n失败不是坏事,\n它只是告诉你,你正在努力。\n\n我希望你学会分享快乐。\n把开心的事情告诉别人,\n把笑声送给身边的人,\n因为快乐被分享的时候,\n会变得更大、更亮。\n\n如果有一天你感到难过,\n我希望你知道——\n难过并不丢脸,\n哭泣也不是软弱。\n\n愿你能找到一个安全的地方,\n慢慢把心里的话说出来,\n然后再一次抬起头,看见希望。\n\n我还希望你能拥有梦想。\n这个梦想也许很大,\n也许很小,\n也许现在还说不清楚。\n\n没关系。\n梦想会和你一起长大,\n在时间里慢慢变得清楚。\n\n最后,我想送你一个最最重要的祝福:\n\n愿你被世界温柔对待,\n也愿你成为一个温柔的人。\n\n愿你的每一天,\n都值得被记住,\n都值得被珍惜。\n\n亲爱的你,\n请记住,\n你是独一无二的,\n你已经很棒了,\n而你的未来,\n一定会慢慢变得闪闪发光。\n\n祝你健康、勇敢、幸福,\n祝你永远带着笑容向前走。"
text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."
ref_text_1 = "太阳系八大行星之一。"
ref_text_2 = "But I really can't complain about not having a normal college experience to you."
ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"

conversations = [
    # Continuation only
    [
        processor.build_user_message(text=ref_text_1 + text_1),
        processor.build_assistant_message(audio_codes_list=[ref_audio_1]),
    ],
    # Continuation with voice cloning
    [
        processor.build_user_message(text=ref_text_2 + text_2, reference=[ref_audio_2]),
        processor.build_assistant_message(audio_codes_list=[ref_audio_2]),
    ],
]

model = AutoModel.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    # If FlashAttention 2 is installed, you can set attn_implementation="flash_attention_2"
    attn_implementation=attn_implementation,
    torch_dtype=dtype,
).to(device)
model.eval()

batch_size = 1

save_dir = Path("inference_root")
save_dir.mkdir(exist_ok=True, parents=True)
sample_idx = 0
with torch.no_grad():
    for start in range(0, len(conversations), batch_size):
        batch_conversations = conversations[start : start + batch_size]
        batch = processor(batch_conversations, mode="continuation")
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=4096,
        )

        for message in processor.decode(outputs):
            audio = message.audio_codes_list[0]
            out_path = save_dir / f"sample{sample_idx}.wav"
            sample_idx += 1
            torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
```

### Input Types

**UserMessage**

| Field | Type | Required | Description |
|---|---|---:|---|
| `text` | `str` | Yes | Text to synthesize. Supports Chinese, English, German, French, Spanish, Japanese, Korean, etc. Can mix raw text with Pinyin or IPA for pronunciation control. |
| `reference` | `List[str]` | No | Reference audio for voice cloning. For current MOSS-TTS, **one audio** is expected in the list. |
| `tokens` | `int` | No | Expected number of audio tokens. **1s ≈ 12.5 tokens**. |

**AssistantMessage**

| Field | Type | Required | Description |
|---|---|---:|---|
| `audio_codes_list` | `List[str]` | Only for continuation | Prefix audio for continuation-based cloning. Use audio file paths or URLs. |

### Generation Hyperparameters

| Parameter | Type | Default | Description |
|---|---|---:|---|
| `max_new_tokens` | `int` | — | Controls total generated audio tokens. Use the duration rule: **1s ≈ 12.5 tokens**. |
| `audio_temperature` | `float` | 1.7 | Higher values increase variation; lower values stabilize prosody. |
| `audio_top_p` | `float` | 0.8 | Nucleus sampling cutoff. Lower values are more conservative. |
| `audio_top_k` | `int` | 25 | Top-K sampling. Lower values tighten the sampling space. |
| `audio_repetition_penalty` | `float` | 1.0 | Values >1.0 discourage repeating patterns. |

> Note: MOSS-TTS is a pretrained base model and is **sensitive to decoding hyperparameters**. See **Released Models** for recommended defaults.

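Since both the `tokens` field and `max_new_tokens` follow the same 1 s ≈ 12.5 tokens rule, the conversion is easy to keep in a small helper. The functions below are hypothetical convenience wrappers (not part of the MOSS-TTS API); only the 12.5 tokens-per-second rate comes from the tables above.

```python
import math

# Duration rule from the docs: 1 second of audio ≈ 12.5 audio tokens.
AUDIO_TOKENS_PER_SECOND = 12.5


def seconds_to_tokens(seconds: float) -> int:
    """Approximate audio-token budget for a target duration (rounded up)."""
    return math.ceil(seconds * AUDIO_TOKENS_PER_SECOND)


def tokens_to_seconds(tokens: int) -> float:
    """Approximate duration produced by a given token budget."""
    return tokens / AUDIO_TOKENS_PER_SECOND


print(seconds_to_tokens(26))   # 325, the `tokens=325` budget used in Basic Usage
print(tokens_to_seconds(600))  # 48.0 seconds for the `tokens=600` budget
```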
### Pinyin Input

Use tone-numbered Pinyin such as `ni3 hao3 wo3 men1`. You can convert Chinese text with [pypinyin](https://github.com/mozillazg/python-pinyin), then adjust tones for pronunciation control.

```python
import re

from pypinyin import pinyin, Style

CN_PUNCT = r",。!?;:、()“”‘’"


def fix_punctuation_spacing(s: str) -> str:
    s = re.sub(rf"\s+([{CN_PUNCT}])", r"\1", s)
    s = re.sub(rf"([{CN_PUNCT}])\s+", r"\1", s)
    return s


def zh_to_pinyin_tone3(text: str, strict: bool = True) -> str:
    result = pinyin(
        text,
        style=Style.TONE3,
        heteronym=False,
        strict=strict,
        errors="default",
    )

    s = " ".join(item[0] for item in result)
    return fix_punctuation_spacing(s)


text = zh_to_pinyin_tone3("您好,请问您来自哪座城市?")
print(text)

# Expected: nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?
# Try: nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?
```

### IPA Input

Use `/.../` to wrap IPA sequences so they are distinct from normal text. You can use [DeepPhonemizer](https://github.com/spring-media/DeepPhonemizer) to convert English paragraphs or words into IPA sequences.

```python
from dp.phonemizer import Phonemizer

# Download a phonemizer checkpoint from https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/DeepPhonemizer/en_us_cmudict_ipa_forward.pt
model_path = "<path-to-phonemizer-checkpoint>"
phonemizer = Phonemizer.from_checkpoint(model_path)

english_text = "Hello, may I ask which city you are from?"
phoneme_output = phonemizer(
    english_text,
    lang="en_us",
    batch_size=8,
)
model_input_text = f"/{phoneme_output}/"
print(model_input_text)

# Expected: /həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/
```

## 3. Evaluation
MOSS-TTS achieved state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval, not only surpassing all open-source models but also rivaling the most powerful closed-source models.

| Model | Params | Open-source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
|---|---:|:---:|---:|---:|---:|---:|
| DiTAR | 0.6B | ❌ | 1.69 | 73.5 | 1.02 | 75.3 |
| FishAudio-S1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
| Seed-TTS | | | 2.25 | 76.2 | 1.12 | 79.6 |
| MiniMax-Speech | | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
| | | | | | | |
| CosyVoice | 0.3B | ✅ | 4.29 | 60.9 | 3.63 | 72.3 |
| CosyVoice2 | 0.5B | ✅ | 3.09 | 65.9 | 1.38 | 75.7 |
| CosyVoice3 | 0.5B | ✅ | 2.02 | 71.8 | 1.16 | 78 |
| CosyVoice3 | 1.5B | ✅ | 2.22 | 72 | 1.12 | 78.1 |
| F5-TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
| SparkTTS | 0.5B | ✅ | 3.14 | 57.3 | 1.54 | 66 |
| FireRedTTS | 0.5B | | 3.82 | 46 | 1.51 | 63.5 |
| FireRedTTS-2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
| Qwen2.5-Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
| FishAudio-S1-mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
| IndexTTS2 | 1.5B | ✅ | 2.23 | 70.6 | 1.03 | 76.5 |
| VibeVoice | 1.5B | ✅ | 3.04 | 68.9 | 1.16 | 74.4 |
| HiggsAudio-v2 | 3B | | 2.44 | 67.7 | 1.5 | 74 |
| VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | **0.93** | 77.2 |
| Qwen3-TTS | 0.6B | | 1.68 | 70.39 | 1.23 | 76.4 |
| Qwen3-TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
| | | | | | | |
| MossTTSDelay | 8B | | 1.79 | 71.46 | 1.32 | 77.05 |
| MossTTSLocal | 1.7B | | 1.85 | **73.42** | 1.2 | **78.82** |
1
+ ---
2
+ license: apache-2.0
3
+ tags:
4
+ - text-to-speech
5
+ language:
6
+ - zh
7
+ - en
8
+ - de
9
+ - es
10
+ - fr
11
+ - ja
12
+ - it
13
+ - he
14
+ - ko
15
+ - ru
16
+ - fa
17
+ - ar
18
+ - pl
19
+ - pt
20
+ - cs
21
+ - da
22
+ - sv
23
+ - hu
24
+ - el
25
+ - tr
26
+ ---
27
+ # MOSS-TTS Family
28
+
29
+
30
+ <br>
31
+
32
+ <p align="center">
33
+ &nbsp;&nbsp;&nbsp;&nbsp;
34
+ <img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/openmoss_x_mosi" height="50" align="middle" />
35
+ </p>
36
+
37
+
38
+
39
+ <div align="center">
40
+ <a href="https://github.com/OpenMOSS/MOSS-TTS/tree/main"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
41
<a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
<a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
<a href="https://github.com/OpenMOSS/MOSS-TTS"><img src="https://img.shields.io/badge/Arxiv-Coming%20soon-red?logo=arxiv&amp"></a>

<a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
<a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>
<a href="https://x.com/Open_MOSS"><img src="https://img.shields.io/badge/Twitter-Follow-black?logo=x&amp"></a>
<a href="https://discord.gg/fvm5TaWjU3"><img src="https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&amp"></a>
</div>


## Overview
MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high‑fidelity**, **high‑expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.


## Introduction

<p align="center">
  <img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_tts_family_arch.jpeg" width="85%" />
</p>


When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.

- **MOSS‑TTS**: the flagship production TTS foundation model, centered on high-fidelity zero-shot voice cloning with controllable long-form synthesis, pronunciation, and multilingual/code-switched speech. It serves as the core engine for scalable narration, dubbing, and voice-driven products.
- **MOSS‑TTSD**: a production long-form dialogue model for expressive multi-speaker conversational audio at scale. It supports long-duration continuity, turn-taking control, and zero-shot voice cloning from short references for podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
- **MOSS‑VoiceGenerator**: an open-source voice design model that creates speaker timbres directly from free-form text, without reference audio. It unifies timbre design, style control, and content synthesis, and can be used standalone or as a voice-design layer for downstream TTS.
- **MOSS‑SoundEffect**: a high-fidelity text-to-sound model with broad category coverage and controllable duration for real content production. It generates stable audio from prompts across ambience, urban scenes, creatures, human actions, and music-like clips for film, games, interactive media, and data synthesis.
- **MOSS‑TTS‑Realtime**: a context-aware, multi-turn streaming TTS model for real-time voice agents. By conditioning on dialogue history across both text and prior user acoustics, it delivers low-latency synthesis with coherent, consistent voice responses across turns.

## Released Models

| Model | Architecture | Size | Model Card | Hugging Face |
|---|---|---:|---|---|
| **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
| | MossTTSLocal | 1.7B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
| **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_ttsd_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
| **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_voice_generator_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
| **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_sound_effect_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
| **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_realtime_model_card.md) | 🤗 [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |



# MOSS-TTS
**MOSS-TTS** is a next-generation, production-grade TTS foundation model focused on **voice cloning**, **ultra-long stable speech generation**, **token-level duration control**, **multilingual & code-switched synthesis**, and **fine-grained Pinyin/phoneme-level pronunciation control**. It is built on a clean autoregressive discrete-token recipe that emphasizes high-quality audio tokenization, large-scale diverse pre-training data, and efficient discrete token modeling.

## 1. Overview
### 1.1 TTS Family Positioning
MOSS-TTS is the **flagship base model** in our open-source **TTS Family**. It is designed as a production-ready synthesis backbone that can serve as the primary high-quality engine for scalable voice applications, and as a strong research baseline for controllable TTS and discrete audio token modeling.

**Design goals**
- **Production readiness**: robust voice cloning with stable, on-brand speaker identity at scale
- **Controllability**: duration and pronunciation controls that integrate into real workflows
- **Long-form stability**: consistent identity and delivery for extended narration
- **Multilingual coverage**: multilingual and code-switched synthesis as first-class capabilities


### 1.2 Key Capabilities

MOSS-TTS delivers state-of-the-art quality while providing the fine-grained controllability and long-form stability required for production-grade voice applications, from zero-shot cloning and hour-long narration to token- and phoneme-level control across multilingual and code-switched speech.

* **State-of-the-art evaluation performance** — top-tier objective and subjective results across standard TTS benchmarks and in-house human preference testing, validating both fidelity and naturalness.
* **Zero-shot Voice Cloning** — clone a target speaker's timbre (and part of speaking style) from short reference audio, without speaker-specific fine-tuning.
* **Ultra-long Speech Generation (up to 1 hour)** — continuous long-form speech generation for up to one hour in a single run, designed for extended narration and long-session content creation.
* **Token-level Duration Control** — control pacing, rhythm, pauses, and speaking rate at token resolution for precise alignment and expressive delivery.
* **Phoneme-level Pronunciation Control** — supports:
  * pure **Pinyin** input
  * pure **IPA** phoneme input
  * mixed **Chinese / English / Pinyin / IPA** input in any combination
* **Multilingual support** — high-quality multilingual synthesis with robust generalization across languages and accents.
* **Code-switching** — natural mixed-language generation within a single utterance (e.g., Chinese–English), with smooth transitions, consistent speaker identity, and pronunciation-aware rendering on both sides of the switch.

### 1.3 Model Architecture

MOSS-TTS includes **two complementary architectures**, both trained and released to explore different performance/latency tradeoffs and to support downstream research.

**Architecture A: Delay Pattern (MossTTSDelay)**
- Single Transformer backbone with **(n_vq + 1) heads**.
- Uses **delay scheduling** for multi-codebook audio tokens.
- Strong long-context stability, efficient inference, and production-friendly behavior.

**Architecture B: Global Latent + Local Transformer (MossTTSLocal)**
- Backbone produces a **global latent** per time step.
- A lightweight **Local Transformer** emits a token block per step.
- **Streaming-friendly** with simpler alignment (no delay scheduling).

**Why train both?**
- **Exploration of architectural potential** and validation across multiple generation paradigms.
- **Different tradeoffs**: the delay pattern tends to be faster and more stable for long-form synthesis; the local transformer is smaller and excels on objective benchmarks.
- **Open-source value**: two strong baselines for research, ablation, and downstream innovation.

For full details, see:
- **[moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md)**
- **[moss_tts_local/README.md](https://github.com/OpenMOSS/MOSS-TTS/tree/main/moss_tts_local)**

### 1.4 Released Models

| Model | Description |
|---|---|
| **MossTTSDelay-8B** | **Recommended for production**. Faster inference, stronger long-context stability, and robust voice cloning quality. Best for large-scale deployment and long-form narration. |
| **MossTTSLocal-1.7B** | **Recommended for evaluation and research**. Smaller model size with SOTA objective metrics. Great for quick experiments, ablations, and academic studies. |

**Recommended decoding hyperparameters (per model)**

| Model | audio_temperature | audio_top_p | audio_top_k | audio_repetition_penalty |
|---|---:|---:|---:|---:|
| **MossTTSDelay-8B** | 1.7 | 0.8 | 25 | 1.0 |
| **MossTTSLocal-1.7B** | 1.0 | 0.95 | 50 | 1.1 |
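
These presets can be kept beside your inference code and splatted into `model.generate` as keyword arguments. A minimal sketch (the `DECODING_PRESETS` table mirrors the rows above; the dict and helper names are ours, not part of the MOSS-TTS API):

```python
# Recommended decoding presets per released model, copied from the table above.
DECODING_PRESETS = {
    "MossTTSDelay-8B": {
        "audio_temperature": 1.7,
        "audio_top_p": 0.8,
        "audio_top_k": 25,
        "audio_repetition_penalty": 1.0,
    },
    "MossTTSLocal-1.7B": {
        "audio_temperature": 1.0,
        "audio_top_p": 0.95,
        "audio_top_k": 50,
        "audio_repetition_penalty": 1.1,
    },
}


def decoding_kwargs(model_name: str) -> dict:
    """Return a fresh copy of the sampling kwargs for a released model."""
    return dict(DECODING_PRESETS[model_name])
```

Usage: `model.generate(input_ids=..., attention_mask=..., **decoding_kwargs("MossTTSDelay-8B"))`; the parameter names match the Generation Hyperparameters section in the Quick Start.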


## 2. Quick Start


### Environment Setup

We recommend a clean, isolated Python environment with **Transformers 5.0.0** to avoid dependency conflicts.

```bash
conda create -n moss-tts python=3.12 -y
conda activate moss-tts
```

Install all required dependencies:

```bash
git clone https://github.com/OpenMOSS/MOSS-TTS.git
cd MOSS-TTS
pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e .
```

#### (Optional) Install FlashAttention 2

For better speed and lower GPU memory usage, you can install FlashAttention 2 if your hardware supports it.

```bash
pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
```

If your machine has limited RAM and many CPU cores, you can cap build parallelism:

```bash
MAX_JOBS=4 pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
```

Notes:
- Dependencies are managed in `pyproject.toml`, which currently pins `torch==2.9.1+cu128` and `torchaudio==2.9.1+cu128`.
- If FlashAttention 2 fails to build on your machine, you can skip it and use the default attention backend.
- FlashAttention 2 is only available on supported GPUs and is typically used with `torch.float16` or `torch.bfloat16`.


### Basic Usage

> Tip: For production usage, prioritize **MossTTSDelay-8B**. The examples below use this model; **MossTTSLocal-1.7B** supports the same API, and a practical walkthrough is available in [moss_tts_local/README.md](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer).

MOSS-TTS provides a convenient `generate` interface for rapid usage. The examples below cover:
1. Direct generation (Chinese / English / Pinyin / IPA)
2. Voice cloning
3. Duration control

```python
from pathlib import Path
import importlib.util
import torch
import torchaudio
from transformers import AutoModel, AutoProcessor

# Disable the broken cuDNN SDPA backend
torch.backends.cuda.enable_cudnn_sdp(False)
# Keep these enabled as fallbacks
torch.backends.cuda.enable_flash_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(True)
torch.backends.cuda.enable_math_sdp(True)


pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32


def resolve_attn_implementation() -> str:
    # Prefer FlashAttention 2 when package + device conditions are met.
    if (
        device == "cuda"
        and importlib.util.find_spec("flash_attn") is not None
        and dtype in {torch.float16, torch.bfloat16}
    ):
        major, _ = torch.cuda.get_device_capability()
        if major >= 8:
            return "flash_attention_2"

    # CUDA fallback: use PyTorch SDPA kernels.
    if device == "cuda":
        return "sdpa"

    # CPU fallback.
    return "eager"


attn_implementation = resolve_attn_implementation()
print(f"[INFO] Using attn_implementation={attn_implementation}")

processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
)
processor.audio_tokenizer = processor.audio_tokenizer.to(device)

text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。\n\n首先,我想祝你——\n每天都能平平安安、快快乐乐。\n\n希望你早上醒来的时候,\n窗外有光,屋子里很安静,\n你的心是轻轻的,没有着急,也没有害怕。\n\n希望你吃饭的时候胃口很好,\n走路的时候脚步稳稳,\n晚上睡觉的时候,能做一个又一个甜甜的梦。\n\n我希望你能一直保持好奇心。\n对世界充满问题,\n对天空、星星、花草、书本和故事感兴趣。\n当你问“为什么”的时候,\n希望总有人愿意认真地听你说话。\n\n我也希望你学会温柔。\n温柔地对待朋友,\n温柔地对待小动物,\n也温柔地对待自己。\n\n如果有一天你犯了错,\n请不要太快责怪自己,\n因为每一个认真成长的人,\n都会在路上慢慢学会更好的方法。\n\n愿你拥有勇气。\n当你站在陌生的地方时,\n当你第一次举手发言时,\n当你遇到困难、感到害怕的时候,\n希望你能轻轻地告诉自己:\n“我可以试一试。”\n\n就算没有一次成功,也没有关系。\n失败不是坏事,\n它只是告诉你,你正在努力。\n\n我希望你学会分享快乐。\n把开心的事情告诉别人,\n把笑声送给身边的人,\n因为快乐被分享的时候,\n会变得更大、更亮。\n\n如果有一天你感到难过,\n我希望你知道——\n难过并不丢脸,\n哭泣也不是软弱。\n\n愿你能找到一个安全的地方,\n慢慢把心里的话说出来,\n然后再一次抬起头,看见希望。\n\n我还希望你能拥有梦想。\n这个梦想也许很大,\n也许很小,\n也许现在还说不清楚。\n\n没关系。\n梦想会和你一起长大,\n在时间里慢慢变得清楚。\n\n最后,我想送你一个最最重要的祝福:\n\n愿你被世界温柔对待,\n也愿你成为一个温柔的人。\n\n愿你的每一天,\n都值得被记住,\n都值得被珍惜。\n\n亲爱的你,\n请记住,\n你是独一无二的,\n你已经很棒了,\n而你的未来,\n一定会慢慢变得闪闪发光。\n\n祝你健康、勇敢、幸福,\n祝你永远带着笑容向前走。"
text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."
text_3 = "nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?"
text_4 = "nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?"
text_5 = "您好,请问您来自哪 zuo4 cheng2 shi4?"
text_6 = "/həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/"

ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"

conversations = [
    # Direct TTS (no reference)
    [processor.build_user_message(text=text_1)],
    [processor.build_user_message(text=text_2)],
    # Pinyin or IPA input
    [processor.build_user_message(text=text_3)],
    [processor.build_user_message(text=text_4)],
    [processor.build_user_message(text=text_5)],
    [processor.build_user_message(text=text_6)],
    # Voice cloning (with reference)
    [processor.build_user_message(text=text_1, reference=[ref_audio_1])],
    [processor.build_user_message(text=text_2, reference=[ref_audio_2])],
    # Duration control
    [processor.build_user_message(text=text_2, tokens=325)],
    [processor.build_user_message(text=text_2, tokens=600)],
]

model = AutoModel.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    # If FlashAttention 2 is installed, you can set attn_implementation="flash_attention_2"
    attn_implementation=attn_implementation,
    torch_dtype=dtype,
).to(device)
model.eval()

batch_size = 1

save_dir = Path("inference_root")
save_dir.mkdir(exist_ok=True, parents=True)
sample_idx = 0
with torch.no_grad():
    for start in range(0, len(conversations), batch_size):
        batch_conversations = conversations[start : start + batch_size]
        batch = processor(batch_conversations, mode="generation")
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=4096,
        )

        for message in processor.decode(outputs):
            audio = message.audio_codes_list[0]
            out_path = save_dir / f"sample{sample_idx}.wav"
            sample_idx += 1
            torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
```


### Continuation + Voice Cloning (Prefix Audio + Text)

MOSS-TTS supports continuation-based cloning: provide a prefix audio clip in the assistant message, and make sure the **prefix transcript** is included in the text. The model continues in the same speaker identity and style.

```python
from pathlib import Path
import importlib.util
import torch
import torchaudio
from transformers import AutoModel, AutoProcessor

# Disable the broken cuDNN SDPA backend
torch.backends.cuda.enable_cudnn_sdp(False)
# Keep these enabled as fallbacks
torch.backends.cuda.enable_flash_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(True)
torch.backends.cuda.enable_math_sdp(True)


pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32


def resolve_attn_implementation() -> str:
    # Prefer FlashAttention 2 when package + device conditions are met.
    if (
        device == "cuda"
        and importlib.util.find_spec("flash_attn") is not None
        and dtype in {torch.float16, torch.bfloat16}
    ):
        major, _ = torch.cuda.get_device_capability()
        if major >= 8:
            return "flash_attention_2"

    # CUDA fallback: use PyTorch SDPA kernels.
    if device == "cuda":
        return "sdpa"

    # CPU fallback.
    return "eager"


attn_implementation = resolve_attn_implementation()
print(f"[INFO] Using attn_implementation={attn_implementation}")

processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
)
processor.audio_tokenizer = processor.audio_tokenizer.to(device)

text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。\n\n首先,我想祝你——\n每天都能平平安安、快快乐乐。\n\n希望你早上醒来的时候,\n窗外有光,屋子里很安静,\n你的心是轻轻的,没有着急,也没有害怕。\n\n希望你吃饭的时候胃口很好,\n走路的时候脚步稳稳,\n晚上睡觉的时候,能做一个又一个甜甜的梦。\n\n我希望你能一直保持好奇心。\n对世界充满问题,\n对天空、星星、花草、书本和故事感兴趣。\n当你问“为什么”的时候,\n希望总有人愿意认真地听你说话。\n\n我也希望你学会温柔。\n温柔地对待朋友,\n温柔地对待小动物,\n也温柔地对待自己。\n\n如果有一天你犯了错,\n请不要太快责怪自己,\n因为每一个认真成长的人,\n都会在路上慢慢学会更好的方法。\n\n愿你拥有勇气。\n当你站在陌生的地方时,\n当你第一次举手发言时,\n当你遇到困难、感到害怕的时候,\n希望你能轻轻地告诉自己:\n“我可以试一试。”\n\n就算没有一次成功,也没有关系。\n失败不是坏事,\n它只是告诉你,你正在努力。\n\n我希望你学会分享快乐。\n把开心的事情告诉别人,\n把笑声送给身边的人,\n因为快乐被分享的时候,\n会变得更大、更亮。\n\n如果有一天你感到难过,\n我希望你知道——\n难过并不丢脸,\n哭泣也不是软弱。\n\n愿你能找到一个安全的地方,\n慢慢把心里的话说出来,\n然后再一次抬起头,看见希望。\n\n我还希望你能拥有梦想。\n这个梦想也许很大,\n也许很小,\n也许现在还说不清楚。\n\n没关系。\n梦想会和你一起长大,\n在时间里慢慢变得清楚。\n\n最后,我想送你一个最最重要的祝福:\n\n愿你被世界温柔对待,\n也愿你成为一个温柔的人。\n\n愿你的每一天,\n都值得被记住,\n都值得被珍惜。\n\n亲爱的你,\n请记住,\n你是独一无二的,\n你已经很棒了,\n而你的未来,\n一定会慢慢变得闪闪发光。\n\n祝你健康、勇敢、幸福,\n祝你永远带着笑容向前走。"
text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."
ref_text_1 = "太阳系八大行星之一。"
ref_text_2 = "But I really can't complain about not having a normal college experience to you."
ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"

conversations = [
    # Continuation only
    [
        processor.build_user_message(text=ref_text_1 + text_1),
        processor.build_assistant_message(audio_codes_list=[ref_audio_1]),
    ],
    # Continuation with voice cloning
    [
        processor.build_user_message(text=ref_text_2 + text_2, reference=[ref_audio_2]),
        processor.build_assistant_message(audio_codes_list=[ref_audio_2]),
    ],
]

model = AutoModel.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    # If FlashAttention 2 is installed, you can set attn_implementation="flash_attention_2"
    attn_implementation=attn_implementation,
    torch_dtype=dtype,
).to(device)
model.eval()

batch_size = 1

save_dir = Path("inference_root")
save_dir.mkdir(exist_ok=True, parents=True)
sample_idx = 0
with torch.no_grad():
    for start in range(0, len(conversations), batch_size):
        batch_conversations = conversations[start : start + batch_size]
        batch = processor(batch_conversations, mode="continuation")
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=4096,
        )

        for message in processor.decode(outputs):
            audio = message.audio_codes_list[0]
            out_path = save_dir / f"sample{sample_idx}.wav"
            sample_idx += 1
            torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
```



### Input Types

**UserMessage**

| Field | Type | Required | Description |
|---|---|---:|---|
| `text` | `str` | Yes | Text to synthesize. Supports Chinese, English, German, French, Spanish, Japanese, Korean, etc. Can mix raw text with Pinyin or IPA for pronunciation control. |
| `reference` | `List[str]` | No | Reference audio for voice cloning. For current MOSS-TTS, **one audio** is expected in the list. |
| `tokens` | `int` | No | Expected number of audio tokens. **1s ≈ 12.5 tokens**. |

**AssistantMessage**

| Field | Type | Required | Description |
|---|---|---:|---|
| `audio_codes_list` | `List[str]` | Only for continuation | Prefix audio for continuation-based cloning. Use audio file paths or URLs. |


### Generation Hyperparameters

| Parameter | Type | Default | Description |
|---|---|---:|---|
| `max_new_tokens` | `int` | — | Controls total generated audio tokens. Use the duration rule: **1s ≈ 12.5 tokens**. |
| `audio_temperature` | `float` | 1.7 | Higher values increase variation; lower values stabilize prosody. |
| `audio_top_p` | `float` | 0.8 | Nucleus sampling cutoff. Lower values are more conservative. |
| `audio_top_k` | `int` | 25 | Top-K sampling. Lower values tighten the sampling space. |
| `audio_repetition_penalty` | `float` | 1.0 | Values >1.0 discourage repeating patterns. |

> Note: MOSS-TTS is a pretrained base model and is **sensitive to decoding hyperparameters**. See **Released Models** for recommended defaults.
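
The duration rule translates directly into a `max_new_tokens` budget. A minimal sketch (the helper name is ours, not part of the API; round up so the model is not cut off mid-utterance):

```python
import math

TOKENS_PER_SECOND = 12.5  # duration rule: 1 s of audio ≈ 12.5 audio tokens


def max_new_tokens_for(duration_s: float) -> int:
    """Audio-token budget for a target duration, rounded up."""
    return math.ceil(duration_s * TOKENS_PER_SECOND)


print(max_new_tokens_for(60))  # 750 tokens for ~60 s of audio
```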


### Pinyin Input

Use tone-numbered Pinyin such as `ni3 hao3 wo3 men1`. You can convert Chinese text with [pypinyin](https://github.com/mozillazg/python-pinyin), then adjust tones for pronunciation control.

```python
import re
from pypinyin import pinyin, Style

CN_PUNCT = r",。!?;:、()“”‘’"


def fix_punctuation_spacing(s: str) -> str:
    s = re.sub(rf"\s+([{CN_PUNCT}])", r"\1", s)
    s = re.sub(rf"([{CN_PUNCT}])\s+", r"\1", s)
    return s


def zh_to_pinyin_tone3(text: str, strict: bool = True) -> str:
    result = pinyin(
        text,
        style=Style.TONE3,
        heteronym=False,
        strict=strict,
        errors="default",
    )

    s = " ".join(item[0] for item in result)
    return fix_punctuation_spacing(s)


text = zh_to_pinyin_tone3("您好,请问您来自哪座城市?")
print(text)

# Expected: nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?
# Try: nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?
```


### IPA Input

Use `/.../` to wrap IPA sequences so they are distinct from normal text. You can use [DeepPhonemizer](https://github.com/spring-media/DeepPhonemizer) to convert English paragraphs or words into IPA sequences.

```python
from dp.phonemizer import Phonemizer

# Download a phonemizer checkpoint from https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/DeepPhonemizer/en_us_cmudict_ipa_forward.pt
model_path = "<path-to-phonemizer-checkpoint>"
phonemizer = Phonemizer.from_checkpoint(model_path)

english_texts = "Hello, may I ask which city you are from?"
phoneme_outputs = phonemizer(
    english_texts,
    lang="en_us",
    batch_size=8,
)
model_input_text = f"/{phoneme_outputs}/"
print(model_input_text)

# Expected: /həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/
```


## 3. Evaluation
MOSS-TTS achieves state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval, not only surpassing all open-source models but also rivaling the most powerful closed-source models.

| Model | Params | Open-source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
|---|---:|:---:|---:|---:|---:|---:|
| DiTAR | 0.6B | ❌ | 1.69 | 73.5 | 1.02 | 75.3 |
| FishAudio-S1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
| Seed-TTS | — | ❌ | 2.25 | 76.2 | 1.12 | 79.6 |
| MiniMax-Speech | — | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
| | | | | | | |
| CosyVoice | 0.3B | ✅ | 4.29 | 60.9 | 3.63 | 72.3 |
| CosyVoice2 | 0.5B | ✅ | 3.09 | 65.9 | 1.38 | 75.7 |
| CosyVoice3 | 0.5B | ✅ | 2.02 | 71.8 | 1.16 | 78 |
| CosyVoice3 | 1.5B | ✅ | 2.22 | 72 | 1.12 | 78.1 |
| F5-TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
| SparkTTS | 0.5B | ✅ | 3.14 | 57.3 | 1.54 | 66 |
| FireRedTTS | 0.5B | ✅ | 3.82 | 46 | 1.51 | 63.5 |
| FireRedTTS-2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
| Qwen2.5-Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
| FishAudio-S1-mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
| IndexTTS2 | 1.5B | ✅ | 2.23 | 70.6 | 1.03 | 76.5 |
| VibeVoice | 1.5B | ✅ | 3.04 | 68.9 | 1.16 | 74.4 |
| HiggsAudio-v2 | 3B | ✅ | 2.44 | 67.7 | 1.5 | 74 |
| VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | **0.93** | 77.2 |
| Qwen3-TTS | 0.6B | ✅ | 1.68 | 70.39 | 1.23 | 76.4 |
| Qwen3-TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
| | | | | | | |
| MossTTSDelay | 8B | ✅ | 1.79 | 71.46 | 1.32 | 77.05 |
| MossTTSLocal | 1.7B | ✅ | 1.85 | **73.42** | 1.2 | **78.82** |