---
license: apache-2.0
---
# MOSS-TTS Family

## Overview

MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high fidelity**, **high expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.

## Introduction

<p align="center">
  <img src="./assets/moss_tts_family.jpeg" width="85%" />
</p>

When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.

- **MOSS‑TTS**: the flagship, production-ready text-to-speech foundation model in the family, built to ship, scale, and power real-world voice applications beyond demos. Its core capability is high-fidelity zero-shot voice cloning, complemented by ultra-long speech generation, token-level duration control, multilingual and code-switched synthesis, and fine-grained Pinyin/phoneme pronunciation control. Together, these features make it a robust base model for scalable narration, dubbing, and voice-driven products.
- **MOSS‑TTSD**: a production-oriented long-form spoken dialogue generation model for creating highly expressive, multi-party conversational audio at scale. It supports continuous long-duration generation, flexible multi-speaker turn-taking control, and zero-shot voice cloning from short reference audio, enabling natural conversations with rich interaction dynamics. It is designed for real-world long-form content such as podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
- **MOSS‑VoiceGenerator**: an open-source voice design system that generates speaker timbres directly from free-form text descriptions, enabling fast creation of voices for characters, personalities, and emotions without requiring reference audio. It unifies timbre design, style control, and content synthesis in a single instruction-driven model, producing high-fidelity, emotionally expressive speech that feels naturally human. It can be used standalone for creative production, or as a voice design layer for downstream TTS systems.
- **MOSS‑SoundEffect**: a high-fidelity sound effect generation model built for real-world content creation, offering strong environmental richness, broad category coverage, and reliable duration control. Trained on large-scale, high-quality data, it generates consistent audio from text prompts across natural ambience, urban scenes, creatures, human actions, and music-like clips. It is well suited to film and game production, interactive experiences, and data synthesis pipelines.
- **MOSS‑TTS‑Realtime**: a context-aware, multi-turn streaming TTS foundation model designed for real-time voice agents. Unlike conventional TTS, which synthesizes each reply in isolation, it conditions generation on multi-turn dialogue history, including both textual and acoustic signals from prior user speech, so responses stay coherent, consistent, and natural across turns. With low-latency incremental synthesis and strong voice stability, it enables truly conversational, human-like real-time speech experiences.

## Released Models

| Model | Architecture | Size | Model Card | Hugging Face |
|---|---|---:|---|---|
| **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
| | MossTTSLocal | 1.7B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
| **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_ttsd_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
| **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_voice_generator_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
| **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_sound_effect_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
| **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_realtime_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |

# MOSS-TTS

## 1. Overview

### 1.1 TTS Family Positioning

MOSS-TTS is the **flagship base model** in our open-source **TTS Family**. It is designed as a production-ready synthesis backbone that can serve as the primary high-quality engine for scalable voice applications, and as a strong research baseline for controllable TTS and discrete audio token modeling.

**Design goals**
- **Production readiness**: robust voice cloning with stable, on-brand speaker identity at scale
- **Controllability**: duration and pronunciation controls that integrate into real workflows
- **Long-form stability**: consistent identity and delivery for extended narration
- **Multilingual coverage**: multilingual and code-switched synthesis as first-class capabilities

### 1.2 Key Capabilities

MOSS-TTS delivers state-of-the-art quality while providing the fine-grained controllability and long-form stability required for production-grade voice applications, from zero-shot cloning and hour-long narration to token- and phoneme-level control across multilingual and code-switched speech.

* **State-of-the-art evaluation performance** — top-tier objective and subjective results across standard TTS benchmarks and in-house human preference testing, validating both fidelity and naturalness.
* **Zero-shot Voice Cloning** — clone a target speaker's timbre (and part of their speaking style) from short reference audio, without speaker-specific fine-tuning.
* **Ultra-long Speech Generation (up to 1 hour)** — continuous long-form speech generation for up to one hour in a single run, designed for extended narration and long-session content creation.
* **Token-level Duration Control** — control pacing, rhythm, pauses, and speaking rate at token resolution for precise alignment and expressive delivery.
* **Phoneme-level Pronunciation Control** — supports:
  * pure **Pinyin** input
  * pure **IPA** phoneme input
  * mixed **Chinese / English / Pinyin / IPA** input in any combination
* **Multilingual support** — high-quality multilingual synthesis with robust generalization across languages and accents.
* **Code-switching** — natural mixed-language generation within a single utterance (e.g., Chinese–English), with smooth transitions, consistent speaker identity, and pronunciation-aware rendering on both sides of the switch.

### 1.3 Model Architecture

MOSS-TTS includes **two complementary architectures**, both trained and released to explore different performance/latency tradeoffs and to support downstream research.

**Architecture A: Delay Pattern (MossTTSDelay)**
- Single Transformer backbone with **(n_vq + 1) heads**.
- Uses **delay scheduling** for multi-codebook audio tokens.
- Strong long-context stability, efficient inference, and production-friendly behavior.

**Architecture B: Global Latent + Local Transformer (MossTTSLocal)**
- The backbone produces a **global latent** per time step.
- A lightweight **Local Transformer** emits a block of codebook tokens per step.
- **Streaming-friendly**, with simpler alignment (no delay scheduling).

**Why train both?**
- **Exploration of architectural potential** and validation across multiple generation paradigms.
- **Different tradeoffs**: the delay pattern tends to be faster and more stable for long-form synthesis; the Local variant is smaller and excels on objective benchmarks.
- **Open-source value**: two strong baselines for research, ablation, and downstream innovation.

For full details, see:
- **[moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md)**
- **[moss_tts_local/README.md](https://github.com/OpenMOSS/MOSS-TTS/tree/main/moss_tts_local)**
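To make the delay scheduling of Architecture A concrete, the toy sketch below shifts each deeper codebook one step further to the right, so that at decoding step `t` the backbone predicts codebook `k` for frame `t - k`. This illustrates the general delay-pattern idea only; the names (`PAD`, `apply_delay_pattern`) are hypothetical and this is not the model's actual implementation.

```python
# Toy sketch of delay-pattern scheduling over multi-codebook audio tokens.
# Codebook k is delayed by k steps; PAD marks positions with no token yet.
PAD = -1

def apply_delay_pattern(codes):
    """codes[k][t] is the token of codebook k at frame t."""
    n_vq, n_frames = len(codes), len(codes[0])
    width = n_frames + n_vq - 1  # total decoding steps after delaying
    delayed = [[PAD] * width for _ in range(n_vq)]
    for k, row in enumerate(codes):
        for t, tok in enumerate(row):
            delayed[k][t + k] = tok  # shift codebook k right by k steps
    return delayed

grid = apply_delay_pattern([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
for row in grid:
    print(row)
# [1, 2, 3, -1, -1]
# [-1, 4, 5, 6, -1]
# [-1, -1, 7, 8, 9]
```

Because each codebook lags the previous one by a single step, the model never has to predict all codebooks of the same frame simultaneously, which is what makes a single backbone with `(n_vq + 1)` heads workable.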

### 1.4 Released Models

| Model | Description |
|---|---|
| **MossTTSDelay-8B** | **Recommended for production**. Faster inference, stronger long-context stability, and robust voice cloning quality. Best for large-scale deployment and long-form narration. |
| **MossTTSLocal-1.7B** | **Recommended for evaluation and research**. Smaller model size with SOTA objective metrics. Great for quick experiments, ablations, and academic studies. |

**Recommended decoding hyperparameters (per model)**

| Model | audio_temperature | audio_top_p | audio_top_k | audio_repetition_penalty |
|---|---:|---:|---:|---:|
| **MossTTSDelay-8B** | 1.7 | 0.8 | 25 | 1.0 |
| **MossTTSLocal-1.7B** | 1.0 | 0.95 | 50 | 1.1 |

> Note: `max_new_tokens` controls duration. Audio is generated at 12.5 tokens per second, so **1 s ≈ 12.5 tokens**.
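The 12.5-tokens-per-second rule above can be wrapped in a small helper for budgeting `max_new_tokens`. This is a convenience sketch, not part of the released API; `seconds_to_tokens` and `tokens_to_seconds` are hypothetical names.

```python
import math

TOKENS_PER_SECOND = 12.5  # documented audio token rate

def seconds_to_tokens(seconds):
    """Audio-token budget for a target duration, rounded up."""
    return math.ceil(seconds * TOKENS_PER_SECOND)

def tokens_to_seconds(tokens):
    """Approximate duration covered by a given token budget."""
    return tokens / TOKENS_PER_SECOND

print(seconds_to_tokens(60))   # 750 tokens for one minute of audio
print(tokens_to_seconds(125))  # 10.0 seconds
```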

## 2. Quick Start

> Tip: For evaluation and research purposes, we recommend **MossTTSLocal-1.7B**.

MOSS-TTS provides a convenient `generate` interface for rapid usage. The examples below cover:
1. Direct generation (Chinese / English / Pinyin / IPA)
2. Voice cloning
3. Duration control
121
+ ```python
122
+ import os
123
+ from pathlib import Path
124
+ import torch
125
+ import torchaudio
126
+ from transformers import AutoModel, AutoProcessor, GenerationConfig
127
+ # Disable the broken cuDNN SDPA backend
128
+ torch.backends.cuda.enable_cudnn_sdp(False)
129
+ # Keep these enabled as fallbacks
130
+ torch.backends.cuda.enable_flash_sdp(True)
131
+ torch.backends.cuda.enable_mem_efficient_sdp(True)
132
+ torch.backends.cuda.enable_math_sdp(True)
133
+
134
+ class DelayGenerationConfig(GenerationConfig):
135
+ def __init__(self, **kwargs):
136
+ super().__init__(**kwargs)
137
+ self.layers = kwargs.get("layers", [{} for _ in range(32)])
138
+ self.do_samples = kwargs.get("do_samples", None)
139
+ self.n_vq_for_inference = 32
140
+
141
+ def initial_config(tokenizer, model_name_or_path):
142
+ generation_config = DelayGenerationConfig.from_pretrained(model_name_or_path)
143
+ generation_config.pad_token_id = tokenizer.pad_token_id
144
+ generation_config.eos_token_id = 151653
145
+ generation_config.max_new_tokens = 1000000
146
+ generation_config.temperature = 1.0
147
+ generation_config.top_p = 0.95
148
+ generation_config.top_k = 100
149
+ generation_config.repetition_penalty = 1.1
150
+ generation_config.use_cache = True
151
+ generation_config.do_sample = False
152
+ return generation_config
153
+
154
+
155
+
156
+
157
+ pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
158
+ device = "cuda" if torch.cuda.is_available() else "cpu"
159
+ dtype = torch.bfloat16 if device == "cuda" else torch.float32
160
+
161
+ processor = AutoProcessor.from_pretrained(
162
+ pretrained_model_name_or_path,
163
+ trust_remote_code=True,
164
+ )
165
+ processor.audio_tokenizer = processor.audio_tokenizer.to(device)
166
+
167
+ text_1 = """亲爱的你,
168
+ 你好呀。
169
+
170
+ 今天,我想用最认真、最温柔的声音,对你说一些重要的话。
171
+ 这些话,像一颗小小的星星,希望能在你的心里慢慢发光。
172
+
173
+ 首先,我想祝你——
174
+ 每天都能平平安安、快快乐乐。
175
+
176
+ 希望你早上醒来的时候,
177
+ 窗外有光,屋子里很安静,
178
+ 你的心是轻轻的,没有着急,也没有害怕。
179
+ """
180
+ text_2 = """We stand on the threshold of the AI era.
181
+ Artificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."""
182
+ text_3 = "nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?"
183
+ text_4 = "nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?"
184
+ text_5 = "您好,请问您来自哪 zuo4 cheng2 shi4?"
185
+ text_6 = "/həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/"
186
+
187
+ ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
188
+ ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"
189
+
190
+ conversations = [
191
+ # Direct TTS (no reference)
192
+ [
193
+ processor.build_user_message(text=text_1)
194
+ ],
195
+ [
196
+ processor.build_user_message(text=text_2)
197
+ ],
198
+ # Pinyin or IPA input
199
+ [
200
+ processor.build_user_message(text=text_3)
201
+ ],
202
+ [
203
+ processor.build_user_message(text=text_4)
204
+ ],
205
+ [
206
+ processor.build_user_message(text=text_5)
207
+ ],
208
+ [
209
+ processor.build_user_message(text=text_6)
210
+ ],
211
+ # Voice cloning (with reference)
212
+ [
213
+ processor.build_user_message(text=text_1, reference=[ref_audio_1])
214
+ ],
215
+ [
216
+ processor.build_user_message(text=text_2, reference=[ref_audio_2])
217
+ ],
218
+ ]
219
+
220
+
221
+
222
+ model = AutoModel.from_pretrained(
223
+ pretrained_model_name_or_path,
224
+ trust_remote_code=True,
225
+ attn_implementation="sdpa",
226
+ torch_dtype=dtype,
227
+ ).to(device)
228
+ model.eval()
229
+
230
+ generation_config = initial_config(processor.tokenizer, pretrained_model_name_or_path)
231
+ generation_config.n_vq_for_inference = model.channels - 1
232
+ generation_config.do_samples = [True] * model.channels
233
+ generation_config.layers = [
234
+ {
235
+ "repetition_penalty": 1.0,
236
+ "temperature": 1.5,
237
+ "top_p": 1.0,
238
+ "top_k": 50
239
+ }
240
+ ] + [
241
+ {
242
+ "repetition_penalty": 1.1,
243
+ "temperature": 1.0,
244
+ "top_p": 0.95,
245
+ "top_k": 50
246
+ }
247
+ ] * (model.channels - 1)
248
+
249
+ batch_size = 1
250
+
251
+ messages = []
252
+ save_dir = Path(f"inference_root_moss_tts_local_transformer_generation")
253
+ save_dir.mkdir(exist_ok=True, parents=True)
254
+ sample_idx = 0
255
+ with torch.no_grad():
256
+ for start in range(0, len(conversations), batch_size):
257
+ batch_conversations = conversations[start : start + batch_size]
258
+ batch = processor(batch_conversations, mode="generation")
259
+ input_ids = batch["input_ids"].to(device)
260
+ attention_mask = batch["attention_mask"].to(device)
261
+
262
+ outputs = model.generate(
263
+ input_ids=input_ids,
264
+ attention_mask=attention_mask,
265
+ generation_config=generation_config
266
+ )
267
+
268
+ for message in processor.decode(outputs):
269
+ for seg_idx, audio in enumerate(message.audio_codes_list):
270
+ # audio is a waveform tensor after decode_audio_codes
271
+ out_path = save_dir / f"sample{sample_idx}_seg{seg_idx}.wav"
272
+ sample_idx += 1
273
+ torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
274
+
275
+ ```

### Continuation + Voice Cloning (Prefix Audio + Text)

MOSS-TTS supports continuation-based cloning: provide a prefix audio clip in the assistant message, and make sure the **prefix transcript** is included in the text. The model then continues in the same speaker identity and style.

```python
from pathlib import Path

import torch
import torchaudio
from transformers import AutoModel, AutoProcessor, GenerationConfig

# Disable the broken cuDNN SDPA backend
torch.backends.cuda.enable_cudnn_sdp(False)
# Keep these enabled as fallbacks
torch.backends.cuda.enable_flash_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(True)
torch.backends.cuda.enable_math_sdp(True)


class DelayGenerationConfig(GenerationConfig):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.layers = kwargs.get("layers", [{} for _ in range(32)])
        self.do_samples = kwargs.get("do_samples", None)
        self.n_vq_for_inference = 32


def initial_config(tokenizer, model_name_or_path):
    generation_config = DelayGenerationConfig.from_pretrained(model_name_or_path)
    generation_config.pad_token_id = tokenizer.pad_token_id
    generation_config.eos_token_id = 151653
    generation_config.max_new_tokens = 1000000
    generation_config.temperature = 1.0
    generation_config.top_p = 0.95
    generation_config.top_k = 100
    generation_config.repetition_penalty = 1.1
    generation_config.use_cache = True
    generation_config.do_sample = False
    return generation_config


pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
)
processor.audio_tokenizer = processor.audio_tokenizer.to(device)

text_1 = """亲爱的你,
你好呀。

今天,我想用最认真、最温柔的声音,对你说一些重要的话。
这些话,像一颗小小的星星,希望能在你的心里慢慢发光。

首先,我想祝你——
每天都能平平安安、快快乐乐。

希望你早上醒来的时候,
窗外有光,屋子里很安静,
你的心是轻轻的,没有着急,也没有害怕。
"""

ref_text_1 = "太阳系八大行星之一。"
ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"

conversations = [
    # Continuation: the user text must contain the prefix transcript
    [
        processor.build_user_message(text=ref_text_1 + text_1),
        processor.build_assistant_message(audio_codes_list=[ref_audio_1]),
    ],
]

model = AutoModel.from_pretrained(
    pretrained_model_name_or_path,
    trust_remote_code=True,
    attn_implementation="sdpa",
    torch_dtype=dtype,
).to(device)
model.eval()

generation_config = initial_config(processor.tokenizer, pretrained_model_name_or_path)
generation_config.n_vq_for_inference = model.channels - 1
generation_config.do_samples = [True] * model.channels
generation_config.layers = [
    {
        "repetition_penalty": 1.0,
        "temperature": 1.5,
        "top_p": 1.0,
        "top_k": 50,
    }
] + [
    {
        "repetition_penalty": 1.1,
        "temperature": 1.0,
        "top_p": 0.95,
        "top_k": 50,
    }
] * (model.channels - 1)

batch_size = 1

save_dir = Path("inference_root_moss_tts_local_transformer_continuation")
save_dir.mkdir(exist_ok=True, parents=True)
sample_idx = 0
with torch.no_grad():
    for start in range(0, len(conversations), batch_size):
        batch_conversations = conversations[start : start + batch_size]
        batch = processor(batch_conversations, mode="continuation")
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
        )

        for message in processor.decode(outputs):
            for seg_idx, audio in enumerate(message.audio_codes_list):
                # audio is a waveform tensor after decode_audio_codes
                out_path = save_dir / f"sample{sample_idx}_seg{seg_idx}.wav"
                sample_idx += 1
                torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
```

### Input Types

**UserMessage**

| Field | Type | Required | Description |
|---|---|:---:|---|
| `text` | `str` | Yes | Text to synthesize. Supports Chinese, English, German, French, Spanish, Japanese, Korean, etc. Can mix raw text with Pinyin or IPA for pronunciation control. |
| `reference` | `List[str]` | No | Reference audio for voice cloning. The current MOSS-TTS expects **one audio** in the list. |
| `tokens` | `int` | No | Expected number of audio tokens. **1 s ≈ 12.5 tokens**. |

**AssistantMessage**

| Field | Type | Required | Description |
|---|---|:---:|---|
| `audio_codes_list` | `List[str]` | Only for continuation | Prefix audio for continuation-based cloning. Use audio file paths or URLs. |

### Generation Hyperparameters

| Parameter | Type | Default | Description |
|---|---|---:|---|
| `max_new_tokens` | `int` | — | Controls the total number of generated audio tokens. Duration rule: **1 s ≈ 12.5 tokens**. |
| `audio_temperature` | `float` | 1.7 | Higher values increase variation; lower values stabilize prosody. |
| `audio_top_p` | `float` | 0.8 | Nucleus sampling cutoff. Lower values are more conservative. |
| `audio_top_k` | `int` | 25 | Top-k sampling. Lower values tighten the sampling space. |
| `audio_repetition_penalty` | `float` | 1.0 | Values above 1.0 discourage repeating patterns. |

> Note: MOSS-TTS is a pretrained base model and is **sensitive to decoding hyperparameters**. See **Released Models** for recommended defaults.

### Pinyin Input

Use tone-numbered Pinyin such as `ni3 hao3 wo3 men1`. You can convert Chinese text with [pypinyin](https://github.com/mozillazg/python-pinyin), then adjust the tone numbers for pronunciation control.

```python
import re

from pypinyin import Style, pinyin

CN_PUNCT = r",。!?;:、()“”‘’"


def fix_punctuation_spacing(s: str) -> str:
    # Remove the spaces pypinyin leaves around Chinese punctuation
    s = re.sub(rf"\s+([{CN_PUNCT}])", r"\1", s)
    s = re.sub(rf"([{CN_PUNCT}])\s+", r"\1", s)
    return s


def zh_to_pinyin_tone3(text: str, strict: bool = True) -> str:
    result = pinyin(
        text,
        style=Style.TONE3,  # tone numbers appended, e.g. "hao3"
        heteronym=False,
        strict=strict,
        errors="default",
    )
    s = " ".join(item[0] for item in result)
    return fix_punctuation_spacing(s)


text = zh_to_pinyin_tone3("您好,请问您来自哪座城市?")
print(text)
# Expected: nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?
# Try altered tones: nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?
```

### IPA Input

Wrap IPA sequences in `/.../` so they are distinguishable from normal text. You can use [DeepPhonemizer](https://github.com/spring-media/DeepPhonemizer) to convert English words or paragraphs into IPA sequences.

```python
from dp.phonemizer import Phonemizer

# Download a phonemizer checkpoint from
# https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/DeepPhonemizer/en_us_cmudict_ipa_forward.pt
model_path = "<path-to-phonemizer-checkpoint>"
phonemizer = Phonemizer.from_checkpoint(model_path)

english_text = "Hello, may I ask which city you are from?"
phoneme_output = phonemizer(
    english_text,
    lang="en_us",
    batch_size=8,
)
model_input_text = f"/{phoneme_output}/"
print(model_input_text)
# Expected: /həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/
```

## 3. Evaluation

MOSS-TTS achieves state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval, surpassing all open-source models and rivaling the strongest closed-source systems.

| Model | Params | Open-source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
|---|---:|:---:|---:|---:|---:|---:|
| DiTAR | 0.6B | ❌ | 1.69 | 73.5 | 1.02 | 75.3 |
| FishAudio-S1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
| Seed-TTS | | ❌ | 2.25 | 76.2 | 1.12 | 79.6 |
| MiniMax-Speech | | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
| | | | | | | |
| CosyVoice | 0.3B | ✅ | 4.29 | 60.9 | 3.63 | 72.3 |
| CosyVoice2 | 0.5B | ✅ | 3.09 | 65.9 | 1.38 | 75.7 |
| CosyVoice3 | 0.5B | ✅ | 2.02 | 71.8 | 1.16 | 78 |
| CosyVoice3 | 1.5B | ✅ | 2.22 | 72 | 1.12 | 78.1 |
| F5-TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
| SparkTTS | 0.5B | ✅ | 3.14 | 57.3 | 1.54 | 66 |
| FireRedTTS | 0.5B | ✅ | 3.82 | 46 | 1.51 | 63.5 |
| FireRedTTS-2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
| Qwen2.5-Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
| FishAudio-S1-mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
| IndexTTS2 | 1.5B | ✅ | 2.23 | 70.6 | 1.03 | 76.5 |
| VibeVoice | 1.5B | ✅ | 3.04 | 68.9 | 1.16 | 74.4 |
| HiggsAudio-v2 | 3B | ✅ | 2.44 | 67.7 | 1.5 | 74 |
| VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | **0.93** | 77.2 |
| Qwen3-TTS | 0.6B | ✅ | 1.68 | 70.39 | 1.23 | 76.4 |
| Qwen3-TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
| | | | | | | |
| MossTTSDelay | 8B | ✅ | 1.79 | 71.46 | 1.32 | 77.05 |
| MossTTSLocal | 1.7B | ✅ | 1.85 | **73.42** | 1.2 | **78.82** |