# Kinyarwanda Tacotron 2 TTS (Transfer Learning)
This model is a Text-to-Speech (TTS) engine for Kinyarwanda, trained using the ESPnet2 framework. It was developed using transfer learning from a pre-trained English (LJSpeech) model to achieve high-quality results on a low-resource language.
## Model Details
- Architecture: Tacotron 2 (Encoder-Decoder with Attention)
- Vocoder: Parallel WaveGAN (LJSpeech pre-trained)
- Language: Kinyarwanda (RW)
- Hardware: NVIDIA A100 GPU
- Training Epochs: 100
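The transfer-learning setup described above can be sketched in plain PyTorch: parameters from the pre-trained English model are copied wherever shapes match, while language-specific layers (e.g. the character embedding, whose vocabulary differs) keep their fresh initialization. This is a toy sketch only; the module names below are invented for illustration, and the actual recipe uses ESPnet2's own initialization machinery.

```python
import torch
import torch.nn as nn

class TinyTacotron(nn.Module):
    """Stand-in for the real Tacotron 2: a language-specific embedding
    plus encoder/decoder weights that can be shared across languages."""
    def __init__(self, vocab_size: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, 16)
        self.encoder = nn.Linear(16, 16)
        self.decoder = nn.Linear(16, 8)

english = TinyTacotron(vocab_size=40)      # "pre-trained" LJSpeech model
kinyarwanda = TinyTacotron(vocab_size=60)  # new, larger token inventory

# Copy every parameter whose shape matches; the embedding is skipped
# automatically because its vocabulary dimension differs.
src = english.state_dict()
dst = kinyarwanda.state_dict()
transferred = {k: v for k, v in src.items() if dst[k].shape == v.shape}
dst.update(transferred)
kinyarwanda.load_state_dict(dst)

print(sorted(transferred))  # embedding.weight is absent: its shape differs
```

After this warm start, only fine-tuning on Kinyarwanda data is needed, which is what makes the approach attractive for low-resource languages.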
## Usage (Python)

To run this model, install the dependencies:

```bash
pip install espnet espnet_model_zoo parallel_wavegan typeguard==2.13.3 scipy==1.11.4
```
```python
import os
import zipfile

import soundfile as sf
import torch
import yaml
from huggingface_hub import hf_hub_download
from espnet2.bin.tts_inference import Text2Speech
from IPython.display import Audio, display

# 1. Download artifacts
repo_id = "Professor/kinyarwanda-tacotron2-espnet"
model_zip = hf_hub_download(repo_id=repo_id, filename="model.zip")
config_path = hf_hub_download(repo_id=repo_id, filename="config.yaml")
stats_path = hf_hub_download(repo_id=repo_id, filename="feats_stats.npz")

# 2. Extract weights
with zipfile.ZipFile(model_zip, "r") as zip_ref:
    zip_ref.extractall("model_weights")

# Search for the .pth checkpoint (its name inside the archive can vary)
pth_file = None
for root, _dirs, files in os.walk("model_weights"):
    for file in files:
        if file.endswith(".pth"):
            pth_file = os.path.join(root, file)
            break
    if pth_file:
        break
if pth_file is None:
    raise FileNotFoundError("No .pth checkpoint found in model.zip")

# 3. Patch config so the normalization stats point at the downloaded file
with open(config_path, "r") as f:
    config = yaml.safe_load(f)
config["normalize_conf"]["stats_file"] = stats_path
with open("config_patched.yaml", "w") as f:
    yaml.dump(config, f)

# 4. Initialize
text2speech = Text2Speech.from_pretrained(
    model_file=pth_file,
    train_config="config_patched.yaml",
    vocoder_tag="parallel_wavegan/ljspeech_parallel_wavegan.v1",
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# 5. Synthesize
text = "Muraho neza, amakuru yanyu? Kinyarwanda ni ururimi rwiza cyane."
with torch.no_grad():
    output = text2speech(text)
wav = output["wav"].cpu().numpy()
sf.write("output.wav", wav, text2speech.fs)

# 6. Play back
print("✅ Synthesis complete: output.wav")
display(Audio("output.wav", autoplay=True))
```
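Depending on the vocoder's output level, the synthesized waveform can occasionally exceed the [-1, 1] range and clip when written to disk. An optional post-processing step (not part of the original script; the 0.95 peak value is an arbitrary choice) is peak normalization:

```python
import numpy as np

def peak_normalize(wav: np.ndarray, peak: float = 0.95) -> np.ndarray:
    """Scale the waveform so its largest absolute sample equals `peak`."""
    max_abs = np.max(np.abs(wav))
    if max_abs == 0:
        return wav  # silence: nothing to scale
    return wav * (peak / max_abs)

# Dummy waveform that would clip at full scale
wav = np.array([0.2, -1.3, 0.7])
safe = peak_normalize(wav)
print(round(float(np.abs(safe).max()), 6))  # 0.95
```

If needed, this slots into step 5 above as `sf.write("output.wav", peak_normalize(wav), text2speech.fs)`.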
## Evaluation Results
| Metric | Value | Evaluation Setup |
|---|---|---|
| UTMOSv2 | 2.2103 | Average over 400 test samples |
| WER | 46.64% | Transcribed with jq/whisper-large-v3-kin-track-b |
| CER | 12.5% | Transcribed with jq/whisper-large-v3-kin-track-b |
### WER & CER Evaluation
To evaluate intelligibility, this model's output was transcribed by jq/whisper-large-v3-kin-track-b, the winning ASR model of the Kaggle Kinyarwanda Automatic Speech Recognition Competition (Track B). Using the strongest publicly available Kinyarwanda ASR model provides a rigorous, objective benchmark for TTS performance.
Note: The high WER vs. low CER is characteristic of agglutinative languages like Kinyarwanda, where minor phonetic variations result in word-level mismatches.
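The effect described in the note can be reproduced with a plain edit-distance implementation (a sketch, not the exact scoring script used for the table above): a single wrong character inside a long Kinyarwanda word counts as one full word error but only a small character error.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer(ref, hyp):
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    r, h = ref.replace(" ", ""), hyp.replace(" ", "")
    return edit_distance(r, h) / len(r)

ref = "ururimi rwiza cyane"
hyp = "ururimi rwizaa cyane"  # one extra character in one long word
print(round(wer(ref, hyp), 2))  # 0.33
print(round(cer(ref, hyp), 2))  # 0.06
```

One inserted character yields a 33% WER but only a ~6% CER, mirroring the gap between the two metrics reported above.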
## Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```