Update model card with library name, pipeline tag, and paper link

#2
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +44 -348
README.md CHANGED
@@ -1,7 +1,4 @@
  ---
- license: apache-2.0
- tags:
- - text-to-speech
  language:
  - zh
  - en
@@ -23,9 +20,17 @@ language:
  - hu
  - el
  - tr
  ---

  # MOSS-TTS Family

  <br>

@@ -34,13 +39,11 @@ language:
  <img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/openmoss_x_mosi" height="50" align="middle" />
  </p>

-
-
  <div align="center">
  <a href="https://github.com/OpenMOSS/MOSS-TTS/tree/main"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
  <a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
  <a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
- <a href="https://github.com/OpenMOSS/MOSS-TTS"><img src="https://img.shields.io/badge/Arxiv-Coming%20soon-red?logo=arxiv&amp"></a>

  <a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
  <a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>
@@ -82,17 +85,7 @@ When a single piece of audio needs to **sound like a real person**, **pronounce

  ## Supported Languages

- MOSS-TTS, MOSS-TTSD and MOSS-TTS-Realtime currently supports **20 languages**:
-
- | Language | Code | Flag | Language | Code | Flag | Language | Code | Flag |
- |---|---|---|---|---|---|---|---|---|
- | Chinese | zh | 🇨🇳 | English | en | 🇺🇸 | German | de | 🇩🇪 |
- | Spanish | es | 🇪🇸 | French | fr | 🇫🇷 | Japanese | ja | 🇯🇵 |
- | Italian | it | 🇮🇹 | Hebrew | he | 🇮🇱 | Korean | ko | 🇰🇷 |
- | Russian | ru | 🇷🇺 | Persian (Farsi) | fa | 🇮🇷 | Arabic | ar | 🇸🇦 |
- | Polish | pl | 🇵🇱 | Portuguese | pt | 🇵🇹 | Czech | cs | 🇨🇿 |
- | Danish | da | 🇩🇰 | Swedish | sv | 🇸🇪 | Hungarian | hu | 🇭🇺 |
- | Greek | el | 🇬🇷 | Turkish | tr | 🇹🇷 | | | |

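For convenience, the language/code table above collapses into an ISO 639-1 lookup (a quick-reference snippet of our own, not part of the released code):

```python
# Quick reference: ISO 639-1 codes from the README's YAML header mapped to the
# language names in the table above. This dict is illustrative, not shipped code.
SUPPORTED_LANGUAGES = {
    "zh": "Chinese", "en": "English", "de": "German", "es": "Spanish",
    "fr": "French", "ja": "Japanese", "it": "Italian", "he": "Hebrew",
    "ko": "Korean", "ru": "Russian", "fa": "Persian", "ar": "Arabic",
    "pl": "Polish", "pt": "Portuguese", "cs": "Czech", "da": "Danish",
    "sv": "Swedish", "hu": "Hungarian", "el": "Greek", "tr": "Turkish",
}

print(len(SUPPORTED_LANGUAGES))  # → 20
```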
  # MOSS-TTS
  **MOSS-TTS** is a next-generation, production-grade TTS foundation model focused on **voice cloning**, **ultra-long stable speech generation**, **token-level duration control**, **multilingual & code-switched synthesis**, and **fine-grained Pinyin/phoneme-level pronunciation control**. It is built on a clean autoregressive discrete-token recipe that emphasizes high-quality audio tokenization, large-scale diverse pre-training data, and efficient discrete token modeling.
@@ -141,11 +134,6 @@ MOSS-TTS includes **two complementary architectures**, both trained and released
  - A lightweight **Local Transformer** emits a token block per step.
  - **Streaming-friendly** with simpler alignment (no delay scheduling).

- **Why train both?**
- - **Exploration of architectural potential** and validation across multiple generation paradigms.
- - **Different tradeoffs**: Delay pattern tends to be faster and more stable for long-form synthesis; Local is smaller and excels on objective benchmarks.
- - **Open-source value**: two strong baselines for research, ablation, and downstream innovation.
-
  For full details, see:
  - **[moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md)**
  - **[moss_tts_local/README.md](https://github.com/OpenMOSS/MOSS-TTS/tree/main/moss_tts_local)**
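The "delay pattern" that distinguishes the two architectures can be sketched in a few lines (an illustrative toy, not the MOSS-TTS implementation; `apply_delay_pattern` and the `PAD` value are our own names): codebook stream *k* is shifted right by *k* steps, so each decoding step emits a staggered block of codes.

```python
# Toy sketch of a delay pattern over K codebook streams. Stream k is shifted
# right by k steps: step t emits codebook 0 of frame t, codebook 1 of frame
# t-1, and so on. PAD marks positions where no code is available yet.
PAD = -1

def apply_delay_pattern(codes):
    """codes: list of K streams, each a list of T frame codes."""
    num_streams = len(codes)
    num_frames = len(codes[0])
    total_steps = num_frames + num_streams - 1
    out = []
    for k, stream in enumerate(codes):
        # k leading pads, the stream itself, then pads up to a common length.
        row = [PAD] * k + list(stream) + [PAD] * (total_steps - num_frames - k)
        out.append(row)
    return out

codes = [[10, 11, 12], [20, 21, 22], [30, 31, 32]]  # K=3 streams, T=3 frames
for row in apply_delay_pattern(codes):
    print(row)
# → [10, 11, 12, -1, -1]
# → [-1, 20, 21, 22, -1]
# → [-1, -1, 30, 31, 32]
```

The Local-Transformer variant avoids this scheduling entirely by emitting all K codes of a frame in one block per step, which is why the text above calls its alignment simpler.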
@@ -174,7 +162,7 @@ For full details, see:

  ### Environment Setup

- We recommend a clean, isolated Python environment with **Transformers 5.0.0** to avoid dependency conflicts.

  ```bash
  conda create -n moss-tts python=3.12 -y
@@ -189,36 +177,9 @@ cd MOSS-TTS
  pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e .
  ```

- #### (Optional) Install FlashAttention 2
-
- For better speed and lower GPU memory usage, you can install FlashAttention 2 if your hardware supports it.
-
- ```bash
- pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
- ```
-
- If your machine has limited RAM and many CPU cores, you can cap build parallelism:
-
- ```bash
- MAX_JOBS=4 pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
- ```
-
- Notes:
- - Dependencies are managed in `pyproject.toml`, which currently pins `torch==2.9.1+cu128` and `torchaudio==2.9.1+cu128`.
- - If FlashAttention 2 fails to build on your machine, you can skip it and use the default attention backend.
- - FlashAttention 2 is only available on supported GPUs and is typically used with `torch.float16` or `torch.bfloat16`.
-
  ### Basic Usage

- > Tip: For production usage, prioritize **MossTTSDelay-8B**. The examples below use this model; **MossTTSLocal-1.7B** supports the same API, and a practical walkthrough is available in [moss_tts_local/README.md](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer).
-
- MOSS-TTS provides a convenient `generate` interface for rapid usage. The examples below cover:
- 1. Direct generation (Chinese / English / Pinyin / IPA)
- 2. Voice cloning
- 3. Duration control

  ```python
  from pathlib import Path
@@ -226,198 +187,41 @@ import importlib.util
  import torch
  import torchaudio
  from transformers import AutoModel, AutoProcessor
- # Disable the broken cuDNN SDPA backend
- torch.backends.cuda.enable_cudnn_sdp(False)
- # Keep these enabled as fallbacks
- torch.backends.cuda.enable_flash_sdp(True)
- torch.backends.cuda.enable_mem_efficient_sdp(True)
- torch.backends.cuda.enable_math_sdp(True)
-

  pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
  device = "cuda" if torch.cuda.is_available() else "cpu"
  dtype = torch.bfloat16 if device == "cuda" else torch.float32

- def resolve_attn_implementation() -> str:
-     # Prefer FlashAttention 2 when package + device conditions are met.
-     if (
-         device == "cuda"
-         and importlib.util.find_spec("flash_attn") is not None
-         and dtype in {torch.float16, torch.bfloat16}
-     ):
-         major, _ = torch.cuda.get_device_capability()
-         if major >= 8:
-             return "flash_attention_2"
-
-     # CUDA fallback: use PyTorch SDPA kernels.
-     if device == "cuda":
-         return "sdpa"
-
-     # CPU fallback.
-     return "eager"
-
-
- attn_implementation = resolve_attn_implementation()
- print(f"[INFO] Using attn_implementation={attn_implementation}")
-
  processor = AutoProcessor.from_pretrained(
      pretrained_model_name_or_path,
      trust_remote_code=True,
  )
  processor.audio_tokenizer = processor.audio_tokenizer.to(device)

- text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。\n\n首先,我想祝你——\n每天都能平平安安、快快乐乐。\n\n希望你早上醒来的时候,\n窗外有光,屋子里很安静,\n你的心是轻轻的,没有着急,也没有害怕。\n\n希望你吃饭的时候胃口很好,\n走路的时候脚步稳稳,\n晚上睡觉的时候,能做一个又一个甜甜的梦。\n\n我希望你能一直保持好奇心。\n对世界充满问题,\n对天空、星星、花草、书本和故事感兴趣。\n当你问“为什么”的时候,\n希望总有人愿意认真地听你说话。\n\n我也希望你学会温柔。\n温柔地对待朋友,\n温柔地对待小动物,\n也温柔地对待自己。\n\n如果有一天你犯了错,\n请不要太快责怪自己,\n因为每一个认真成长的人,\n都会在路上慢慢学会更好的方法。\n\n愿你拥有勇气。\n当你站在陌生的地方时,\n当你第一次举手发言时,\n当你遇到困难、感到害怕的时候,\n希望你能轻轻地告诉自己:\n“我可以试一试。”\n\n就算没有一次成功,也没有关系。\n失败不是坏事,\n它只是告诉你,你正在努力。\n\n我希望你学会分享快乐。\n把开心的事情告诉别人,\n把笑声送给身边的人,\n因为快乐被分享的时候,\n会变得更大、更亮。\n\n如果有一天你感到难过,\n我希望你知道——\n难过并不丢脸,\n哭泣也不是软弱。\n\n愿你能找到一个安全的地方,\n慢慢把心里的话说出来,\n然后再一次抬起头,看见希望。\n\n我还希望你能拥有梦想。\n这个梦想也许很大,\n也许很小,\n也许现在还说不清楚。\n\n没关系。\n梦想会和你一起长大,\n在时间里慢慢变得清楚。\n\n最后,我想送你一个最最重要的祝福:\n\n愿你被世界温柔对待,\n也愿你成为一个温柔的人。\n\n愿你的每一天,\n都值得被记住,\n都值得被珍惜。\n\n亲爱的你,\n请记住,\n你是独一无二的,\n你已经很棒了,\n而你的未来,\n一定会慢慢变得闪闪发光。\n\n祝你健康、勇敢、幸福,\n祝你永远带着笑容向前走。"
- text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."
- text_3 = "nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?"
- text_4 = "nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?"
- text_5 = "您好,请问您来自哪 zuo4 cheng2 shi4?"
- text_6 = "/həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/"
-
- # Use audio from ./assets/audio to avoid downloading from the cloud.
- ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
- ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"
-
- conversations = [
-     # Direct TTS (no reference)
-     [processor.build_user_message(text=text_1)],
-     [processor.build_user_message(text=text_2)],
-     # Pinyin or IPA input
-     [processor.build_user_message(text=text_3)],
-     [processor.build_user_message(text=text_4)],
-     [processor.build_user_message(text=text_5)],
-     [processor.build_user_message(text=text_6)],
-     # Voice cloning (with reference)
-     [processor.build_user_message(text=text_1, reference=[ref_audio_1])],
-     [processor.build_user_message(text=text_2, reference=[ref_audio_2])],
-     # Duration control
-     [processor.build_user_message(text=text_2, tokens=325)],
-     [processor.build_user_message(text=text_2, tokens=600)],
- ]
-
  model = AutoModel.from_pretrained(
      pretrained_model_name_or_path,
      trust_remote_code=True,
-     # If FlashAttention 2 is installed, you can set attn_implementation="flash_attention_2"
-     attn_implementation=attn_implementation,
      torch_dtype=dtype,
  ).to(device)
  model.eval()

- batch_size = 1
-
- save_dir = Path("inference_root")
- save_dir.mkdir(exist_ok=True, parents=True)
- sample_idx = 0
- with torch.no_grad():
-     for start in range(0, len(conversations), batch_size):
-         batch_conversations = conversations[start : start + batch_size]
-         batch = processor(batch_conversations, mode="generation")
-         input_ids = batch["input_ids"].to(device)
-         attention_mask = batch["attention_mask"].to(device)
-
-         outputs = model.generate(
-             input_ids=input_ids,
-             attention_mask=attention_mask,
-             max_new_tokens=4096,
-         )
-
-         for message in processor.decode(outputs):
-             audio = message.audio_codes_list[0]
-             out_path = save_dir / f"sample{sample_idx}.wav"
-             sample_idx += 1
-             torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
- ```
-
- ### Continuation + Voice Cloning (Prefix Audio + Text)
-
- MOSS-TTS supports continuation-based cloning: provide a prefix audio clip in the assistant message, and make sure the **prefix transcript** is included in the text. The model continues in the same speaker identity and style.
-
- ```python
- from pathlib import Path
- import importlib.util
- import torch
- import torchaudio
- from transformers import AutoModel, AutoProcessor
- # Disable the broken cuDNN SDPA backend
- torch.backends.cuda.enable_cudnn_sdp(False)
- # Keep these enabled as fallbacks
- torch.backends.cuda.enable_flash_sdp(True)
- torch.backends.cuda.enable_mem_efficient_sdp(True)
- torch.backends.cuda.enable_math_sdp(True)
-
-
- pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
- device = "cuda" if torch.cuda.is_available() else "cpu"
- dtype = torch.bfloat16 if device == "cuda" else torch.float32
-
- def resolve_attn_implementation() -> str:
-     # Prefer FlashAttention 2 when package + device conditions are met.
-     if (
-         device == "cuda"
-         and importlib.util.find_spec("flash_attn") is not None
-         and dtype in {torch.float16, torch.bfloat16}
-     ):
-         major, _ = torch.cuda.get_device_capability()
-         if major >= 8:
-             return "flash_attention_2"
-
-     # CUDA fallback: use PyTorch SDPA kernels.
-     if device == "cuda":
-         return "sdpa"
-
-     # CPU fallback.
-     return "eager"
-
-
- attn_implementation = resolve_attn_implementation()
- print(f"[INFO] Using attn_implementation={attn_implementation}")
-
- processor = AutoProcessor.from_pretrained(
-     pretrained_model_name_or_path,
-     trust_remote_code=True
- )
- processor.audio_tokenizer = processor.audio_tokenizer.to(device)
-
- text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。\n\n首先,我想祝你——\n每天都能平平安安、快快乐乐。\n\n希望你早上醒来的时候,\n窗外有光,屋子里很安静,\n你的心是轻轻的,没有着急,也没有害怕。\n\n希望你吃饭的时候胃口很好,\n走路的时候脚步稳稳,\n晚上睡觉的时候,能做一个又一个甜甜的梦。\n\n我希望你能一直保持好奇心。\n对世界充满问题,\n对天空、星星、花草、书本和故事感兴趣。\n当你问“为什么”的时候,\n希望总有人愿意认真地听你说话。\n\n我也希望你学会温柔。\n温柔地对待朋友,\n温柔地对待小动物,\n也温柔地对待自己。\n\n如果有一天你犯了错,\n请不要太快责怪自己,\n因为每一个认真成长的人,\n都会在路上慢慢学会更好的方法。\n\n愿你拥有勇气。\n当你站在陌生的地方时,\n当你第一次举手发言时,\n当你遇到困难、感到害怕的时候,\n希望你能轻轻地告诉自己:\n“我可以试一试。”\n\n就算没有一次成功,也没有关系。\n失败不是坏事,\n它只是告诉你,你正在努力。\n\n我希望你学会分享快乐。\n把开心的事情告诉别人,\n把笑声送给身边的人,\n因为快乐被分享的时候,\n会变得更大、更亮。\n\n如果有一天你感到难过,\n我希望你知道——\n难过并不丢脸,\n哭泣也不是软弱。\n\n愿你能找到一个安全的地方,\n慢慢把心里的话说出来,\n然后再一次抬起头,看见希望。\n\n我还希望你能拥有梦想。\n这个梦想也许很大,\n也许很小,\n也许现在还说不清楚。\n\n没关系。\n梦想会和你一起长大,\n在时间里慢慢变得清楚。\n\n最后,我想送你一个最最重要的祝福:\n\n愿你被世界温柔对待,\n也愿你成为一个温柔的人。\n\n愿你的每一天,\n都值得被记住,\n都值得被珍惜。\n\n亲爱的你,\n请记住,\n你是独一无二的,\n你已经很棒了,\n而你的未来,\n一定会慢慢变得闪闪发光。\n\n祝你健康、勇敢、幸福,\n祝你永远带着笑容向前走。"
- text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."
- ref_text_1 = "太阳系八大行星之一。"
- ref_text_2 = "But I really can't complain about not having a normal college experience to you."
- # Use audio from ./assets/audio to avoid downloading from the cloud.
- ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
- ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"

  conversations = [
-     # Continuatoin only
-     [
-         processor.build_user_message(text=ref_text_1 + text_1),
-         processor.build_assistant_message(audio_codes_list=[ref_audio_1])
-     ],
-     # Continuation with voice cloning
-     [
-         processor.build_user_message(text=ref_text_2 + text_2, reference=[ref_audio_2]),
-         processor.build_assistant_message(audio_codes_list=[ref_audio_2])
-     ],
  ]

- model = AutoModel.from_pretrained(
-     pretrained_model_name_or_path,
-     trust_remote_code=True,
-     # If FlashAttention 2 is installed, you can set attn_implementation="flash_attention_2"
-     attn_implementation=attn_implementation,
-     torch_dtype=dtype,
- ).to(device)
- model.eval()
-
- batch_size = 1
-
- save_dir = Path("inference_root")
- save_dir.mkdir(exist_ok=True, parents=True)
- sample_idx = 0
  with torch.no_grad():
-     for start in range(0, len(conversations), batch_size):
-         batch_conversations = conversations[start : start + batch_size]
-         batch = processor(batch_conversations, mode="continuation")
          input_ids = batch["input_ids"].to(device)
          attention_mask = batch["attention_mask"].to(device)

@@ -429,135 +233,27 @@ with torch.no_grad():

          for message in processor.decode(outputs):
              audio = message.audio_codes_list[0]
-             out_path = save_dir / f"sample{sample_idx}.wav"
-             sample_idx += 1
-             torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
-
- ```
-
- ### Input Types
-
- **UserMessage**
-
- | Field | Type | Required | Description |
- |---|---|---:|---|
- | `text` | `str` | Yes | Text to synthesize. Supports Chinese, English, German, French, Spanish, Japanese, Korean, etc. Can mix raw text with Pinyin or IPA for pronunciation control. |
- | `reference` | `List[str]` | No | Reference audio for voice cloning. For current MOSS-TTS, **one audio** is expected in the list. |
- | `tokens` | `int` | No | Expected number of audio tokens. **1s ≈ 12.5 tokens**. |
-
- **AssistantMessage**
-
- | Field | Type | Required | Description |
- |---|---|---:|---|
- | `audio_codes_list` | `List[str]` | Only for continuation | Prefix audio for continuation-based cloning. Use audio file paths or URLs. |
-
- ### Generation Hyperparameters
-
- | Parameter | Type | Default | Description |
- |---|---|---:|---|
- | `max_new_tokens` | `int` | — | Controls total generated audio tokens. Use duration rule: **1s ≈ 12.5 tokens**. |
- | `audio_temperature` | `float` | 1.7 | Higher values increase variation; lower values stabilize prosody. |
- | `audio_top_p` | `float` | 0.8 | Nucleus sampling cutoff. Lower values are more conservative. |
- | `audio_top_k` | `int` | 25 | Top-K sampling. Lower values tighten sampling space. |
- | `audio_repetition_penalty` | `float` | 1.0 | >1.0 discourages repeating patterns. |
-
- > Note: MOSS-TTS is a pretrained base model and is **sensitive to decoding hyperparameters**. See **Released Models** for recommended defaults.
-
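The **1s ≈ 12.5 tokens** rule used by `tokens` and `max_new_tokens` is easy to wrap in a helper (a convenience sketch of ours, not part of the API); note that the earlier example's `tokens=325` corresponds to roughly 26 seconds of audio:

```python
# Duration rule from the tables above: 1 s of audio ≈ 12.5 audio tokens.
TOKENS_PER_SECOND = 12.5

def seconds_to_tokens(seconds: float, headroom: float = 1.0) -> int:
    """Estimate an audio-token budget for a target duration.

    `headroom` > 1.0 pads the budget, which can be useful for max_new_tokens.
    This helper is our own convenience function, not part of the MOSS-TTS API.
    """
    return int(round(seconds * TOKENS_PER_SECOND * headroom))

print(seconds_to_tokens(26))                 # → 325 (tokens=325 in the example above)
print(seconds_to_tokens(48))                 # → 600 (tokens=600 in the example above)
print(seconds_to_tokens(10, headroom=1.2))   # → 150
```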
- ### Pinyin Input
-
- Use tone-numbered Pinyin such as `ni3 hao3 wo3 men1`. You can convert Chinese text with [pypinyin](https://github.com/mozillazg/python-pinyin), then adjust tones for pronunciation control.
-
- ```python
- import re
- from pypinyin import pinyin, Style
-
- CN_PUNCT = r",。!?;:、()“”‘’"
-
-
- def fix_punctuation_spacing(s: str) -> str:
-     s = re.sub(rf"\s+([{CN_PUNCT}])", r"\1", s)
-     s = re.sub(rf"([{CN_PUNCT}])\s+", r"\1", s)
-     return s
-
-
- def zh_to_pinyin_tone3(text: str, strict: bool = True) -> str:
-     result = pinyin(
-         text,
-         style=Style.TONE3,
-         heteronym=False,
-         strict=strict,
-         errors="default",
-     )
-
-     s = " ".join(item[0] for item in result)
-     return fix_punctuation_spacing(s)
-
- text = zh_to_pinyin_tone3("您好,请问您来自哪座城市?")
- print(text)
-
- # Expected: nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?
- # Try: nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?
- ```
-
- ### IPA Input
-
- Use `/.../` to wrap IPA sequences so they are distinct from normal text. You can use [DeepPhonemizer](https://github.com/spring-media/DeepPhonemizer) to convert English paragraphs or words into IPA sequences.
-
- ```python
- from dp.phonemizer import Phonemizer
-
- # Download a phonemizer checkpoint from https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/DeepPhonemizer/en_us_cmudict_ipa_forward.pt
- model_path = "<path-to-phonemizer-checkpoint>"
- phonemizer = Phonemizer.from_checkpoint(model_path)
-
- english_texts = "Hello, may I ask which city you are from?"
- phoneme_outputs = phonemizer(
-     english_texts,
-     lang="en_us",
-     batch_size=8
- )
- model_input_text = f"/{phoneme_outputs}/"
- print(model_input_text)
-
- # Expected: /həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/
  ```

  ## 3. Evaluation
- MOSS-TTS achieved state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval, not only surpassing all open-source models but also rivaling the most powerful closed-source models.
-
- | Model | Params | Open-source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
- |---|---:|:---:|---:|---:|---:|---:|
- | DiTAR | 0.6B | ❌ | 1.69 | 73.5 | 1.02 | 75.3 |
- | FishAudio-S1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
- | Seed-TTS | | ❌ | 2.25 | 76.2 | 1.12 | 79.6 |
- | MiniMax-Speech | | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
- | | | | | | | |
- | CosyVoice | 0.3B | ✅ | 4.29 | 60.9 | 3.63 | 72.3 |
- | CosyVoice2 | 0.5B | ✅ | 3.09 | 65.9 | 1.38 | 75.7 |
- | CosyVoice3 | 0.5B | | 2.02 | 71.8 | 1.16 | 78 |
- | CosyVoice3 | 1.5B | | 2.22 | 72 | 1.12 | 78.1 |
- | F5-TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
- | SparkTTS | 0.5B | ✅ | 3.14 | 57.3 | 1.54 | 66 |
- | FireRedTTS | 0.5B | ✅ | 3.82 | 46 | 1.51 | 63.5 |
- | FireRedTTS-2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
- | Qwen2.5-Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
- | FishAudio-S1-mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
- | IndexTTS2 | 1.5B | ✅ | 2.23 | 70.6 | 1.03 | 76.5 |
- | VibeVoice | 1.5B | ✅ | 3.04 | 68.9 | 1.16 | 74.4 |
- | HiggsAudio-v2 | 3B | ✅ | 2.44 | 67.7 | 1.5 | 74 |
- | VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | **0.93** | 77.2 |
- | Qwen3-TTS | 0.6B | ✅ | 1.68 | 70.39 | 1.23 | 76.4 |
- | Qwen3-TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
- | | | | | | | |
- | MossTTSDelay | 8B | ✅ | 1.79 | 71.46 | 1.32 | 77.05 |
- | MossTTSLocal | 1.7B | ✅ | 1.85 | **73.42** | 1.2 | **78.82** |

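For intuition, the WER and CER columns above are normalized edit distances between the ASR transcript of the generated audio and the reference text. A minimal pure-Python sketch of the metric (for illustration only, not the official Seed-TTS-eval tooling):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (one-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,             # deletion
                dp[j - 1] + 1,         # insertion
                prev_diag + (r != h),  # substitution (free if tokens match)
            )
    return dp[-1]

def wer(ref: str, hyp: str) -> float:
    """Word error rate: edit distance over reference length.
    CER is the same formula applied to characters instead of words."""
    ref_words, hyp_words = ref.split(), hyp.split()
    return edit_distance(ref_words, hyp_words) / len(ref_words)

print(wer("the cat sat", "the cat sat"))  # → 0.0
print(wer("the cat sat", "the bat sat"))  # 1 substitution / 3 words ≈ 0.33
```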
  ---
  language:
  - zh
  - en
  - hu
  - el
  - tr
+ license: apache-2.0
+ tags:
+ - text-to-speech
+ library_name: transformers
+ pipeline_tag: text-to-speech
+ arxiv: 2602.10934
  ---
+
  # MOSS-TTS Family

+ This repository contains the **MOSS-TTS Family**, a series of next-generation speech and sound generation models introduced in the paper [MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models](https://huggingface.co/papers/2602.10934).

  <br>

  <img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/openmoss_x_mosi" height="50" align="middle" />
  </p>

  <div align="center">
  <a href="https://github.com/OpenMOSS/MOSS-TTS/tree/main"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
  <a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
  <a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
+ <a href="https://arxiv.org/abs/2602.10934"><img src="https://img.shields.io/badge/Arxiv-2602.10934-red?logo=arxiv&amp"></a>

  <a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
  <a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>

  ## Supported Languages

+ MOSS-TTS, MOSS-TTSD, and MOSS-TTS-Realtime currently support **20 languages**: Chinese, English, German, Spanish, French, Japanese, Italian, Hebrew, Korean, Russian, Persian, Arabic, Polish, Portuguese, Czech, Danish, Swedish, Hungarian, Greek, and Turkish.

  # MOSS-TTS
  **MOSS-TTS** is a next-generation, production-grade TTS foundation model focused on **voice cloning**, **ultra-long stable speech generation**, **token-level duration control**, **multilingual & code-switched synthesis**, and **fine-grained Pinyin/phoneme-level pronunciation control**. It is built on a clean autoregressive discrete-token recipe that emphasizes high-quality audio tokenization, large-scale diverse pre-training data, and efficient discrete token modeling.
 
  - A lightweight **Local Transformer** emits a token block per step.
  - **Streaming-friendly** with simpler alignment (no delay scheduling).

  For full details, see:
  - **[moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md)**
  - **[moss_tts_local/README.md](https://github.com/OpenMOSS/MOSS-TTS/tree/main/moss_tts_local)**

  ### Environment Setup

+ We recommend a clean, isolated Python environment with **Transformers** to avoid dependency conflicts.

  ```bash
  conda create -n moss-tts python=3.12 -y
  pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e .
  ```

  ### Basic Usage

+ MOSS-TTS provides a convenient `generate` interface for rapid usage. The example below covers direct generation and voice cloning. Note that you must set `trust_remote_code=True` when loading the model and processor.

  ```python
  from pathlib import Path
 
  import torch
  import torchaudio
  from transformers import AutoModel, AutoProcessor

  pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS"
  device = "cuda" if torch.cuda.is_available() else "cpu"
  dtype = torch.bfloat16 if device == "cuda" else torch.float32

  processor = AutoProcessor.from_pretrained(
      pretrained_model_name_or_path,
      trust_remote_code=True,
  )
  processor.audio_tokenizer = processor.audio_tokenizer.to(device)

  model = AutoModel.from_pretrained(
      pretrained_model_name_or_path,
      trust_remote_code=True,
      torch_dtype=dtype,
  ).to(device)
  model.eval()

+ text_1 = "亲爱的你,\n你好呀。\n\n今天,我想用最认真、最温柔的声音,对你说一些重要的话。\n这些话,像一颗小小的星星,希望能在你的心里慢慢发光。"
+ text_2 = "We stand on the threshold of the AI era.\nArtificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision."
 
  conversations = [
+     [processor.build_user_message(text=text_1)],
+     [processor.build_user_message(text=text_2)],
  ]

  with torch.no_grad():
+     for start in range(0, len(conversations), 1):
+         batch_conversations = conversations[start : start + 1]
+         batch = processor(batch_conversations, mode="generation")
          input_ids = batch["input_ids"].to(device)
          attention_mask = batch["attention_mask"].to(device)

          for message in processor.decode(outputs):
              audio = message.audio_codes_list[0]
+             torchaudio.save("output.wav", audio.unsqueeze(0), processor.model_config.sampling_rate)
  ```

  ## 3. Evaluation
+ MOSS-TTS achieved state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval.
+
+ | Model | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
+ |---|---:|---:|---:|---:|
+ | MossTTSDelay-8B | 1.79 | 71.46 | 1.32 | 77.05 |
+ | MossTTSLocal-1.7B | 1.85 | **73.42** | 1.2 | **78.82** |
+
+ ## Citation
+ If you use this model in your research, please cite:
+ ```bibtex
+ @misc{gong2026mossaudiotokenizerscalingaudiotokenizers,
+       title={MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models},
+       author={Yitian Gong and Kuangwei Chen and Zhaoye Fei and Xiaogui Yang and Ke Chen and Yang Wang and Kexin Huang and Mingshu Chen and Ruixiao Li and Qingyuan Cheng and Shimin Li and Xipeng Qiu},
+       year={2026},
+       eprint={2602.10934},
+       archivePrefix={arXiv},
+       primaryClass={cs.SD},
+       url={https://arxiv.org/abs/2602.10934},
+ }
+ ```