---
license: apache-2.0
language:
- sw
base_model:
- facebook/mms-tts-swh
pipeline_tag: text-to-speech
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
tags:
- text-to-speech
- audio
- speech
- transformers
- vits
- swahili
---
# SALAMA-TTS: Swahili Text-to-Speech Model
**Developer:** AI4NNOV
**Version:** v1.0
**License:** Apache 2.0
**Model Type:** Text-to-Speech (TTS)
**Base Model:** `facebook/mms-tts-swh` (fine-tuned)
---
## Overview
**SALAMA-TTS** is the **speech synthesis module** of the **SALAMA Framework**, a complete end-to-end **Speech-to-Speech AI system** for African languages.
It generates **natural, high-quality Swahili speech** from text and integrates seamlessly with **SALAMA-LLM** and **SALAMA-STT** for conversational voice assistants.
The model is based on **Meta's MMS (Massively Multilingual Speech)** TTS architecture using the **VITS framework**, fine-tuned for natural prosody, tone, and rhythm in Swahili.
---
## Model Architecture
SALAMA-TTS is built on the **VITS architecture**, which combines a **variational autoencoder (VAE)** with **adversarial (GAN) training** for realistic and expressive speech synthesis.
| Parameter | Value |
|------------|--------|
| Base Model | `facebook/mms-tts-swh` |
| Fine-Tuning | 8-bit quantized, LoRA fine-tuning |
| Optimizer | AdamW |
| Learning Rate | 2e-5 |
| Epochs | 20 |
| Sampling Rate | 16 kHz |
| Frameworks | Transformers + Datasets + PyTorch |
| Language | Swahili (`sw`) |
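For illustration, here is a minimal sketch of what an 8-bit + LoRA fine-tuning setup with these hyperparameters could look like, using `peft` and `bitsandbytes`. The LoRA rank and `target_modules` names are assumptions for the sketch, not the exact recipe used for this release:

```python
# Hypothetical LoRA fine-tuning setup matching the table above
# (8-bit base weights, AdamW, lr 2e-5, 20 epochs).
import torch
from transformers import VitsModel, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = VitsModel.from_pretrained(
    "facebook/mms-tts-swh",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                           # assumed rank
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# ... custom training loop over the Swahili speech corpus for 20 epochs ...
```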
---
## Dataset
| Dataset | Description | Purpose |
|----------|--------------|----------|
| `common_voice_17_0` | Swahili voice dataset by Mozilla | Base training |
| Custom Swahili speech corpus | Locally recorded sentences and dialogues | Fine-tuning for naturalness |
| Common Voice Swahili (test split) | Held-out test data | Evaluation |
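For reference, the Common Voice split can be loaded with the `datasets` library. The dataset is gated on the Hub, so accept its terms and log in first (e.g. `huggingface-cli login`):

```python
from datasets import load_dataset, Audio

# Load the Swahili ("sw") test split of Common Voice 17.0
cv_swahili = load_dataset(
    "mozilla-foundation/common_voice_17_0", "sw", split="test"
)

# Resample the audio column to the model's 16 kHz sampling rate
cv_swahili = cv_swahili.cast_column("audio", Audio(sampling_rate=16_000))

sample = cv_swahili[0]
print(sample["sentence"])              # reference transcript
print(sample["audio"]["array"].shape)  # raw waveform samples
```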
---
## Model Capabilities
- Converts **Swahili text to natural-sounding speech**
- Handles **both formal and conversational** registers
- High clarity and prosody for long-form speech
- Seamless integration with **SALAMA-LLM** responses
- Output format: **16-bit PCM WAV**
---
## Evaluation Metrics
| Metric | Score | Description |
|---------|-------|-------------|
| **MOS (Mean Opinion Score)** | **4.05 / 5.0** | Human-rated naturalness |
| **WER (Generated → STT)** | **0.21** | Evaluated by re-transcribing synthesized audio |
> The MOS was evaluated by 12 native Swahili speakers across clarity, tone, and pronunciation.
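A minimal sketch of that round-trip WER protocol, assuming `jiwer` for scoring; the STT checkpoint name `salama/salama-stt` is a placeholder for illustration, not a published model ID:

```python
# Hypothetical round-trip WER check: TTS -> STT -> compare transcripts.
import jiwer
from transformers import pipeline

# Substitute your actual STT checkpoint (e.g. SALAMA-STT) here.
stt = pipeline("automatic-speech-recognition", model="salama/salama-stt")

def round_trip_wer(reference_text: str, wav_path: str) -> float:
    """Re-transcribe synthesized audio and score it against the input text."""
    transcription = stt(wav_path)["text"]
    return jiwer.wer(reference_text.lower(), transcription.lower())

# wav = generate_speech_from_onnx(text, sess, tokenizer)  # see the usage example below
# print(round_trip_wer(text, wav))
```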
---
## Usage (Python Example)
```python
# Requirements:
# pip install onnxruntime librosa soundfile transformers numpy
# For GPU inference: pip install onnxruntime-gpu (and ensure the CUDA toolkit is available)
import os

import numpy as np
import onnxruntime
import soundfile as sf
from transformers import AutoTokenizer

TTS_ONNX_MODEL_PATH = "swahili_tts.onnx"   # path to your .onnx file
TTS_TOKENIZER_ID = "facebook/mms-tts-swh"  # or whichever tokenizer you used
OUTPUT_SAMPLE_RATE = 16000
OUT_DIR = "tts_outputs"

os.makedirs(OUT_DIR, exist_ok=True)


def create_onnx_session(onnx_path: str) -> onnxruntime.InferenceSession:
    """Create an ONNX Runtime session using GPU if available, otherwise CPU."""
    if "CUDAExecutionProvider" in onnxruntime.get_available_providers():
        providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
        print("Using CUDAExecutionProvider for ONNX Runtime.")
    else:
        providers = ["CPUExecutionProvider"]
        print("CUDA not available - using CPUExecutionProvider for ONNX Runtime.")
    return onnxruntime.InferenceSession(onnx_path, providers=providers)


def generate_speech_from_onnx(text: str,
                              onnx_session: onnxruntime.InferenceSession,
                              tokenizer: AutoTokenizer,
                              out_path: str = None) -> str:
    """
    Synthesize speech from text using an ONNX TTS model.
    Returns the path to a WAV file (16 kHz, int16).
    """
    if not text:
        raise ValueError("Empty text provided.")

    # Tokenize to NumPy inputs (match what the ONNX model expects).
    # NOTE: many TTS tokenizers return {"input_ids": np.array(...)} - adapt if yours differs.
    inputs = tokenizer(text, return_tensors="np", padding=True)

    # Identify the ONNX input name (assume the first input) and build the feed dict.
    input_name = onnx_session.get_inputs()[0].name
    ort_inputs = {input_name: inputs["input_ids"].astype(np.int64)}

    # Run ONNX inference. In many single-file TTS ONNX exports,
    # the first output is the raw waveform.
    ort_outs = onnx_session.run(None, ort_inputs)
    audio_waveform = ort_outs[0].flatten()  # ensure a 1-D waveform

    # If the waveform is float in [-1, 1], convert to int16; otherwise cast as a safeguard.
    if np.issubdtype(audio_waveform.dtype, np.floating):
        audio_clip = np.clip(audio_waveform, -1.0, 1.0)
        audio_int16 = (audio_clip * 32767.0).astype(np.int16)
    else:
        audio_int16 = audio_waveform.astype(np.int16)

    # Compose the output filename and save as 16-bit PCM WAV at 16 kHz.
    if out_path is None:
        out_path = os.path.join(OUT_DIR, f"salama_tts_{abs(hash(text)) & 0xFFFF_FFFF}.wav")
    sf.write(out_path, audio_int16, samplerate=OUTPUT_SAMPLE_RATE, subtype="PCM_16")
    return out_path


if __name__ == "__main__":
    # Example usage
    sess = create_onnx_session(TTS_ONNX_MODEL_PATH)
    tokenizer = AutoTokenizer.from_pretrained(TTS_TOKENIZER_ID)
    example_text = "Karibu kwenye mfumo wa SALAMA unaozalisha sauti asilia ya Kiswahili."
    out_wav = generate_speech_from_onnx(example_text, sess, tokenizer)
    print("Saved synthesized audio to:", out_wav)
```
**Example Output:**
> *Audio plays:* "Karibu kwenye mfumo wa SALAMA unaozalisha sauti asilia ya Kiswahili."
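If you have not exported the model to ONNX, the same text can also be synthesized directly with `transformers` using the standard MMS-TTS API. This sketch loads the stock `facebook/mms-tts-swh` weights; substitute the fine-tuned checkpoint where applicable:

```python
# PyTorch inference via transformers, following the standard MMS-TTS usage.
import torch
import soundfile as sf
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("facebook/mms-tts-swh")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-swh")

text = "Karibu kwenye mfumo wa SALAMA unaozalisha sauti asilia ya Kiswahili."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, samples), float32 in [-1, 1]

sf.write("example.wav", waveform.squeeze().numpy(),
         samplerate=model.config.sampling_rate)
```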
---
## Key Features
- **Natural Swahili speech generation**
- **Adapted for African tonal variations**
- **High clarity and rhythm**
- **Fast inference with FP16 precision**
- **Compatible with SALAMA-STT and SALAMA-LLM**