---
license: mit
tags:
  - tts
  - benchmark
  - text-to-speech
  - french
  - english
language:
  - fr
  - en
---

# TTS Model Benchmarks

Benchmark results for various TTS models on French and English, with audio samples sorted worst-to-best by WER.

## Summary Results (French - 500 SIWIS phrases)

| Model | Samples | WER Mean | WER Median | RTF | Real-time? |
|---|---|---|---|---|---|
| Qwen3-TTS 1.7B | 500 | 23.4% | 14.3% | 1.300 | No (0/500) |
| VibeVoice 0.5B (FT) | 500 | 35.0% | 22.9% | 0.416 | Yes (500/500) |
| CeSAMe CSM-1B 4-bit | 150 | 69.6% | 66.7% | 3.246 | No |

## Summary Results (English)

| Model | Samples | WER Mean | WER Median | RTF |
|---|---|---|---|---|
| CeSAMe CSM-1B 4-bit | 150 | 8.7% | 0.0% | 2.857 |
| Qwen3-TTS 1.7B | 300 | 12.9% | 0.0% | 1.690 |

## Audio Samples

Each model folder contains a `worst_to_best/` directory with all generated audio files ranked by WER (worst first). File format: `NNN_werXXX_sampleid.wav`.

- `vibevoice_french/worst_to_best/` - 500 audio samples
- `qwen3_tts_french/worst_to_best/` - 500 audio samples
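If you want to work with the ranked files programmatically, the filename pattern above can be parsed with a small helper. This is a sketch based only on the stated `NNN_werXXX_sampleid.wav` format; the exact zero-padding and how `XXX` encodes the WER value are assumptions, so verify against the actual filenames.

```python
import re

# Parse a worst_to_best filename of the form NNN_werXXX_sampleid.wav.
# Assumptions: NNN is the zero-padded rank (001 = worst) and XXX is a
# numeric WER value; the precise encoding is inferred, not documented here.
FILENAME_RE = re.compile(r"^(?P<rank>\d+)_wer(?P<wer>\d+)_(?P<sample_id>.+)\.wav$")

def parse_sample_filename(name: str) -> dict:
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    return {
        "rank": int(m.group("rank")),
        "wer": int(m.group("wer")),
        "sample_id": m.group("sample_id"),
    }

# Hypothetical filename for illustration:
print(parse_sample_filename("001_wer250_siwis_0042.wav"))
```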

## Structure

```
vibevoice_french/
  results.csv            # Full benchmark results
  worst_to_best/         # Audio ranked by WER (001 = worst)
qwen3_tts_french/
  results.csv
  worst_to_best/
cesame_unsloth_baseline/
  results.csv            # EN + FR baseline (no audio)
qwen3_tts_english/
  results.csv            # EN baseline (no audio)
```
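A minimal sketch for inspecting a `results.csv`, e.g. to list the worst samples by WER. The column names `sample_id` and `wer` are assumptions; check the real CSV header before relying on them.

```python
import csv
from typing import Iterable

# Sort benchmark rows by WER (highest first) and return the top n.
# Assumption: the CSV has "sample_id" and "wer" columns.
def worst_samples(csv_lines: Iterable[str], n: int = 5) -> list:
    rows = list(csv.DictReader(csv_lines))
    rows.sort(key=lambda r: float(r["wer"]), reverse=True)
    return rows[:n]

# Usage with a real file:
# with open("vibevoice_french/results.csv", newline="") as f:
#     print(worst_samples(f, n=10))
```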

## Evaluation Methodology

- **WER**: Word Error Rate, computed by transcribing the generated audio with the OpenAI Whisper API and comparing against the reference text
- **RTF**: Real-Time Factor, generation time / audio duration (< 1.0 = real-time capable)
- **Benchmark**: 500 SIWIS French phrases (seed=42, 15 < len < 300, deduplicated)
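The two metrics above are straightforward to compute once a transcript is available. A minimal sketch (the Whisper transcription step is omitted; `hypothesis` stands in for the transcript Whisper would return for the generated audio):

```python
# WER: word-level edit distance between reference and hypothesis,
# normalized by the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# RTF: time spent generating divided by the duration of the audio
# produced; below 1.0 means faster than real time.
def rtf(generation_seconds: float, audio_seconds: float) -> float:
    return generation_seconds / audio_seconds
```

For example, one substituted word in a three-word reference gives a WER of 1/3; note the Whisper API typically applies its own text normalization, which the reference text must match for comparable scores.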

## Models

- **VibeVoice-Realtime-0.5B** (Microsoft) - fine-tuned on SIWIS French → Rcarvalo/vibevoice
- **CeSAMe CSM-1B** (Sesame) - Unsloth 4-bit quantization
- **Qwen3-TTS-12Hz-1.7B** (Alibaba) - base model