
Tricky TTS

A benchmark dataset for evaluating text-to-speech (TTS) models on linguistically and typographically challenging English text. Each row is designed to stress-test a specific failure mode that separates capable TTS systems from weaker ones.

Built with Trelis Studio

Evaluations were run using Trelis Studio. For custom voice model development, see Trelis Voice AI Services.

Evaluation methodology

  • Round-trip ASR CER: the TTS model generates audio → Whisper transcribes it back → character error rate (CER) is computed against the human reference
  • MOS (naturalness): UTMOS score on the generated audio
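
CER here is the character-level Levenshtein distance between the round-trip transcript and the human reference, normalised by reference length. The exact text normalisation Trelis Studio applies before scoring (casing, punctuation handling) is not documented here, but the core metric can be sketched as:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance / reference length."""
    # One-row dynamic-programming table of prefix edit distances
    prev = list(range(len(hypothesis) + 1))
    for i, rc in enumerate(reference, 1):
        curr = [i]
        for j, hc in enumerate(hypothesis, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (rc != hc),  # substitution
            ))
        prev = curr
    return prev[-1] / max(len(reference), 1)
```

A transcript identical to the reference scores 0.0; each character edit adds 1/len(reference).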

Dataset

Four rows, one per challenge category:

Category                  What it tests
symbol_expansion          Unicode symbols, units, operators — ≥, μL, ±, ×10⁶
abbreviation_reading      Acronyms, initialisms, Roman numerals, dotted titles — IEEE, Vol. XII, F.A.C.C.
proper_nouns              Irish/Celtic names, HuggingFace model paths, brand names
prosody_and_punctuation   Em-dashes, ellipses, onomatopoeia, rhythm — zzz, Psst, whoosh
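
The spoken_form column encodes how symbols in text should be verbalised. The dataset's actual normalisation rules are not published; the table below is a hypothetical, illustrative sketch of the symbol_expansion idea, not the dataset's real mapping:

```python
# Illustrative symbol-to-speech table (assumed, not the dataset's actual rules)
EXPANSIONS = {
    "≥": " greater than or equal to ",
    "±": " plus or minus ",
    "×": " times ",
    "μL": " microlitres ",
    "%": " percent ",
}

def expand_symbols(text: str) -> str:
    """Replace each symbol with its spoken form, then collapse whitespace."""
    for symbol, spoken in EXPANSIONS.items():
        text = text.replace(symbol, spoken)
    return " ".join(text.split())
```

A real normaliser also needs context (e.g. "×" as "by" in dimensions vs "times" in multiplication), which is exactly what these rows stress-test.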

Columns: text, category, spoken_form (normalised reference transcription), reference_audio (human voice recording, webm), reference_asr (transcription of reference audio by openai/whisper-large-v3 via Trelis Studio ASR eval).

Usage

from datasets import load_dataset
ds = load_dataset("Trelis/tricky-tts-public", split="train")
for row in ds:
    print(row["category"], row["text"])

Leaderboard

Evaluated with round-trip ASR (human reference transcribed with Whisper large-v3; model outputs scored with fireworks/whisper-v3). MOS from UTMOS. The human reference audio scored 4.22 MOS.

Rank  Model                 MOS ↑  CER ↓  Eval dataset
1     Gemini Pro TTS        4.227  0.112  Trelis/tricky-tts-gemini-pro-tts
2     GPT-4o mini TTS       4.330  0.121  Trelis/tricky-tts-gpt-4o-mini-tts
3     Gemini Flash TTS      4.184  0.122  Trelis/tricky-tts-gemini-flash-tts
4     ElevenLabs            4.273  0.192  Trelis/tricky-tts-elevenlabs
5     Kokoro                4.511  0.209  Trelis/tricky-tts-kokoro
6     Orpheus               4.152  0.229  Trelis/tricky-tts-orpheus
7     Cartesia Sonic-3      4.019  0.259  Trelis/tricky-tts-cartesia-sonic-3
8     Piper (en-gb)         3.777  0.323  Trelis/tricky-tts-piper-en-gb
9     Mistral Voxtral-Mini  4.289  0.569  Trelis/tricky-tts-mistral
10    Chatterbox            4.100  0.583  Trelis/tricky-tts-chatterbox

License

MIT
