update readme & fix inference error

Files changed:
- README.md (+32, −0)
- modeling_moss_tts.py (+15, −5)
- processing_moss_tts.py (+21, −25)
README.md
CHANGED

```diff
@@ -1,3 +1,35 @@
 ---
 license: apache-2.0
 ---
+# MOSS-TTS Family
+
+## Overview
+MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high‑fidelity**, **high‑expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.
+
+
+## Introduction
+
+<p align="center">
+<img src="./assets/moss_tts_family.jpeg" width="85%" />
+</p>
+
+When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.
+
+- **MOSS‑TTS**: MOSS-TTS is the flagship, production-ready Text-to-Speech foundation model in the MOSS-TTS Family, built to ship, scale, and deliver real-world voice applications beyond demos. It provides high-fidelity zero-shot voice cloning as the core capability, along with ultra-long speech generation, token-level duration control, multilingual and code-switched synthesis, and fine-grained Pinyin/phoneme pronunciation control. Together, these features make it a robust base model for scalable narration, dubbing, and voice-driven products.
+- **MOSS‑TTSD**: MOSS-TTSD is a production-oriented long-form spoken dialogue generation model for creating highly expressive, multi-party conversational audio at scale. It supports continuous long-duration generation, flexible multi-speaker turn-taking control, and zero-shot voice cloning from short reference audio, enabling natural conversations with rich interaction dynamics. It is designed for real-world long-form content such as podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
+- **MOSS‑VoiceGenerator**: MOSS-VoiceGenerator is an open-source voice design system that generates speaker timbres directly from free-form text descriptions, enabling fast creation of voices for characters, personalities, and emotions—without requiring reference audio. It unifies timbre design, style control, and content synthesis in a single instruction-driven model, producing high-fidelity, emotionally expressive speech that feels naturally human. It can be used standalone for creative production, or as a voice design layer that improves integration and usability for downstream TTS systems.
+- **MOSS‑SoundEffect**: MOSS-SoundEffect is a high-fidelity sound effect generation model built for real-world content creation, offering strong environmental richness, broad category coverage, and reliable duration controllability. Trained on large-scale, high-quality data, it generates consistent audio from text prompts across natural ambience, urban scenes, creatures, human actions, and music-like clips. It is well suited for film and game production, interactive experiences, and data synthesis pipelines.
+- **MOSS‑TTS‑Realtime**: MOSS-TTS-Realtime is a context-aware, multi-turn streaming TTS foundation model designed for real-time voice agents. Unlike conventional TTS that synthesizes replies in isolation, it conditions generation on multi-turn dialogue history—including both textual and acoustic signals from prior user speech—so responses stay coherent, consistent, and natural across turns. With low-latency incremental synthesis and strong voice stability, it enables truly conversational, human-like real-time speech experiences.
+
+
+## Released Models
+
+| Model | Architecture | Size | Model Card | Hugging Face |
+|---|---|---:|---|---|
+| **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
+| | MossTTSLocal | 1.7B | [moss_tts_model_card.md](moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
+| **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_ttsd_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
+| **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_voice_generator_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
+| **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_sound_effect_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
+| **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_realtime_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |
+
```
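The dialogue-capable models above (e.g. MOSS‑TTSD) address speakers with bracketed, 1-based tags such as `[S1]` and `[S2]`, matching the speaker-tag fix in `processing_moss_tts.py` below. As a minimal illustration of that convention, here is a sketch of assembling a per-speaker reference prompt; `AUDIO_PLACEHOLDER` and `build_reference_prompt` are illustrative names, not the repo's actual API:

```python
AUDIO_PLACEHOLDER = "<|audio|>"  # illustrative placeholder token, not the repo's literal token

def build_reference_prompt(references):
    # Join per-speaker reference slots into one prompt; speakers without
    # reference audio are skipped, and tags stay 1-based ([S1], [S2], ...)
    # keyed to the speaker's position, even when earlier speakers are skipped.
    parts = []
    for idx, ref in enumerate(references):
        if ref is not None:
            parts.append(f"[S{idx + 1}]:\n{AUDIO_PLACEHOLDER}")
    return "\n".join(parts)
```

For example, `build_reference_prompt(["alice.wav", None, "bob.wav"])` keeps the third speaker's tag as `[S3]` even though the second speaker contributes no slot.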
modeling_moss_tts.py
CHANGED

```diff
@@ -398,9 +398,9 @@ class MossTTSDelayModel(MossTTSDelayPreTrainedModel):
         text_temperature: float = 1.5,
         text_top_p: float = 1.0,
         text_top_k: int = 50,
-        audio_temperature: float = 1.
+        audio_temperature: float = 1.7,
         audio_top_p: float = 0.8,
-        audio_top_k: int =
+        audio_top_k: int = 25,
         audio_repetition_penalty: float = 1.0,
     ):
         if text_temperature > 0:
@@ -495,11 +495,21 @@ class MossTTSDelayModel(MossTTSDelayPreTrainedModel):
         next_audio_tokens[~sampling_audio_mask] = self.config.audio_pad_code

         if sampling_audio_mask.sum() > 0:
-
+            audio_ch0_logits = next_token_logits[1][sampling_audio_mask[:, 0]]
+            audio_logits = torch.stack(next_token_logits[2:], dim=1)[sampling_audio_mask[:, 1:]]
+            audio_ch0_logits[..., self.config.audio_pad_code] = float('-inf')
             audio_logits[..., self.config.audio_pad_code] = float('-inf')
-            next_audio_tokens[sampling_audio_mask] = sample_token(
+            next_audio_tokens[:, 0][sampling_audio_mask[:, 0]] = sample_token(
+                logits=audio_ch0_logits,
+                prev_tokens=generation_ids[:, :, 1],
+                repetition_penalty=audio_repetition_penalty,
+                top_p=audio_top_p,
+                top_k=audio_top_k,
+                do_sample=audio_do_sample
+            )
+            next_audio_tokens[:, 1:][sampling_audio_mask[:, 1:]] = sample_token(
                 logits=audio_logits,
-                prev_tokens=generation_ids[:, :,
+                prev_tokens=generation_ids[:, :, 2:],
                 repetition_penalty=audio_repetition_penalty,
                 top_p=audio_top_p,
                 top_k=audio_top_k,
```
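The fix above masks the audio pad code to `-inf` before sampling, so padding can never be drawn during generation, and then samples with temperature, top-p, and top-k. A self-contained, pure-Python sketch of that sampling pattern follows; this `sample_token` is a simplified stand-in for the repo's helper (single logit vector, no repetition penalty, no batching), with illustrative names and defaults:

```python
import math
import random

def sample_token(logits, banned_ids=(), temperature=1.0, top_k=0, top_p=1.0, rng=None):
    # Simplified sketch: temperature + top-k + top-p (nucleus) sampling over
    # one logit vector, with banned ids (e.g. the audio pad code) masked to
    # -inf so they can never be drawn.
    rng = rng or random.Random()
    scores = list(logits)
    for i in banned_ids:
        scores[i] = float("-inf")
    if temperature <= 0:  # greedy decoding
        return max(range(len(scores)), key=scores.__getitem__)
    scores = [s / temperature for s in scores]
    # Candidate ids sorted by score, best first; top-k truncates the list.
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    if top_k > 0:
        order = order[:top_k]
    best = scores[order[0]]
    weights = [math.exp(scores[i] - best) for i in order]  # stable softmax
    total = sum(weights)
    probs = [w / total for w in weights]
    if top_p < 1.0:  # nucleus: smallest prefix whose mass reaches top_p
        cum, keep = 0.0, 0
        for p in probs:
            cum += p
            keep += 1
            if cum >= top_p:
                break
        order, probs = order[:keep], probs[:keep]
        norm = sum(probs)
        probs = [p / norm for p in probs]
    r, cum = rng.random(), 0.0
    for i, p in zip(order, probs):  # inverse-CDF draw
        cum += p
        if r <= cum:
            return i
    return order[-1]
```

Masking before the softmax (rather than resampling on a pad hit) keeps the remaining probability mass correctly renormalized, which is the same reason the commit assigns `float('-inf')` to `audio_pad_code` before calling `sample_token`.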
processing_moss_tts.py
CHANGED

```diff
@@ -88,7 +88,7 @@ class UserMessage(Message):
         reference = []
         for speaker_idx, speaker_reference in enumerate(self.reference):
             if speaker_reference is not None:
-                reference.append(f"[S{speaker_idx}]:\n{AUDIO_PLACEHOLDER}")
+                reference.append(f"[S{speaker_idx+1}]:\n{AUDIO_PLACEHOLDER}")
         reference = "\n".join(reference)
         audio_codes_list = [
             speaker_reference
@@ -548,6 +548,7 @@ class MossTTSDelayProcessor(ProcessorMixin):
         """
         if role == "user":
             audio_gen_slot_token = audio_delay_slot_token = self.audio_user_slot_token
+            truncation = False
         else:
             audio_gen_slot_token = self.audio_assistant_gen_slot_token
             audio_delay_slot_token = self.audio_assistant_delay_slot_token
@@ -895,30 +896,25 @@ class MossTTSDelayProcessor(ProcessorMixin):
             codes.transpose(0, 1).contiguous().to(device=device, dtype=torch.long)
             for codes in audio_tokens_list
         ]
-
-            max_t =
-            )
-        dec = audio_tokenizer.decode(
-            audio_codes, padding_mask=padding_mask, return_dict=True
-        )
-        audio = dec.audio
-        audio_lengths = dec.audio_lengths
+
+        # Fallback: pad to (NQ, B, T) + mask, then decode.
+        nq = int(codes_list[0].shape[0])
+        max_t = max(int(c.shape[1]) for c in codes_list)
+        audio_codes = torch.zeros(
+            nq, len(codes_list), max_t, device=device, dtype=torch.long
+        )
+        padding_mask = torch.zeros(
+            len(codes_list), max_t, device=device, dtype=torch.bool
+        )
+        for i, c in enumerate(codes_list):
+            t = int(c.shape[1])
+            audio_codes[:, i, :t] = c
+            padding_mask[i, :t] = True
+        dec = audio_tokenizer.decode(
+            audio_codes, padding_mask=padding_mask, return_dict=True, chunk_duration=8
+        )
+        audio = dec.audio
+        audio_lengths = dec.audio_lengths

         if audio is None or audio_lengths is None:
             raise RuntimeError(
```