---
license: mit
task_categories:
  - automatic-speech-recognition
language:
  - en
tags:
  - medical
  - asr
  - entity-cer
  - benchmark
size_categories:
  - n<1K
---

# MultiMed Hard — Medical ASR Benchmark

Entity-aware medical ASR benchmark — 50 hard rows from medical lectures and interviews.

Prepared by Trelis Research. Watch more on YouTube or inquire about our custom voice AI (ASR/TTS) services here.

## Source

Derived from the leduckhai/MultiMed EN test split (4,751 rows, MIT license), sourced from YouTube medical channels: lectures, interviews, podcasts, and documentaries. Transcripts are human-reviewed.

## Preparation

1. Length filter: audio ≥ 2 s and ≤ 29 s, text ≥ 20 chars
2. Casing filter: drop all-caps and all-lowercase rows
3. Whisper CER filter: drop rows with whisper-large-v3 CER > 10% (likely bad ground-truth alignment)
4. Entity tagging with Gemini Flash (6 medical categories)
5. Keep rows with ≥ 1 entity (entity text ≥ 5 chars)
6. Three-model difficulty filter (whisper-large-v3, canary-1b-v2, Voxtral-Mini) with whisper-english normalization
7. Exclude rows with median entity CER > 0.9
8. LLM validation (Gemini Flash): drop non-medical content, generic entities, and ground-truth typos
9. Keep the top 50 rows by median entity CER
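Steps 6–9 rank rows by entity-level CER across the three filter models. The exact scoring lives in the preparation pipeline; the sketch below is only an approximation, assuming a plain Levenshtein CER and best-window matching of the entity inside each model's hypothesis (no whisper-english normalization here):

```python
from statistics import median

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def entity_cer(entity: str, hypothesis: str) -> float:
    """CER of an entity against its best-matching window in a hypothesis."""
    n = len(entity)
    starts = range(max(len(hypothesis) - n + 1, 1))
    best = min(edit_distance(entity, hypothesis[i:i + n]) for i in starts)
    return best / max(n, 1)

# Median across the three difficulty-filter models (step 6); hypotheses are made up.
hypotheses = [
    "the patient was started on metoprolol",      # exact
    "the patient was started on metoprawl",       # garbled
    "the patient was started on a beta blocker",  # missed entirely
]
scores = [entity_cer("metoprolol", h) for h in hypotheses]
print(median(scores))
```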

## Entity categories

- `drug` — drug or medication names (brand or INN)
- `condition` — diagnoses, diseases, syndromes, disorders
- `procedure` — surgical, diagnostic, or therapeutic procedures
- `anatomy` — anatomical structures, organs, body regions
- `biomarker` — lab tests, biomarkers, genes, proteins, molecular markers
- `organisation` — hospitals, regulatory bodies, pharmaceutical companies

## Columns

- `audio` — 16 kHz WAV
- `text` — ground-truth transcript (human-reviewed)
- `entities` — JSON array of tagged medical entities with `text`, `category`, `char_start`, `char_end`
- `difficulty_rank` — 1 = hardest
- `median_entity_cer` — median entity CER across the 3 difficulty-filter models
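The entity offsets index into `text`, so each tagged span can be recovered by slicing. A minimal parsing sketch; the row below is illustrative (made-up values), and I assume `entities` is serialized as a JSON string — check the actual feature type on the dataset page:

```python
import json

# Illustrative row matching the documented schema (values are made up).
row = {
    "text": "The patient was started on metoprolol for hypertension.",
    "entities": json.dumps([
        {"text": "metoprolol", "category": "drug", "char_start": 27, "char_end": 37},
        {"text": "hypertension", "category": "condition", "char_start": 42, "char_end": 54},
    ]),
}

for ent in json.loads(row["entities"]):
    span = row["text"][ent["char_start"]:ent["char_end"]]  # slice by offsets
    print(f'{ent["category"]}: {span}')
```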

## Leaderboard (16 models, sorted by Entity CER)

| # | Model | WER | CER | Entity CER | Results |
|---|-------|-----|-----|------------|---------|
| 1 | scribe-v2 | 0.100 | 0.060 | 0.134 | results |
| 2 | MultiMed-ST (whisper-small-en) | 0.115 | 0.075 | 0.160 | results |
| 3 | gemini-2.5-pro | 0.105 | 0.062 | 0.167 | results |
| 4 | ursa-2-enhanced | 0.105 | 0.060 | 0.196 | results |
| 5 | whisper-large-v3 | 0.085 | 0.052 | 0.197 | results |
| 6 | nova-3 | 0.120 | 0.069 | 0.199 | results |
| 7 | whisper-large-v3-turbo | 0.093 | 0.056 | 0.218 | results |
| 8 | whisper-small | 0.133 | 0.075 | 0.228 | results |
| 9 | parakeet-tdt-0.6b-v3 | 0.159 | 0.101 | 0.233 | results |
| 10 | universal-3-pro | 0.125 | 0.100 | 0.234 | results |
| 11 | canary-1b-v2 | 0.150 | 0.093 | 0.255 | results |
| 12 | whisper-v3 (fireworks) | 0.130 | 0.090 | 0.261 | results |
| 13 | Voxtral-Mini-3B-2507 | 0.109 | 0.075 | 0.273 | results |
| 14 | medasr | 0.251 | 0.145 | 0.278 | results |
| 15 | whisper-tiny | 0.236 | 0.144 | 0.360 | results |
| 16 | whisper-base | 0.221 | 0.156 | 0.379 | results |

Evaluated with Trelis Studio using whisper-english normalization.
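WER and CER are edit distance over normalized words and characters respectively. A rough sketch of scoring a single hypothesis; the `normalize` below is a crude stand-in for whisper-english normalization, which additionally handles numbers, contractions, and spelling variants:

```python
import re

def edit_distance(ref, hyp) -> int:
    """Levenshtein distance over sequences (tokens or characters)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (r != h)))
        prev = cur
    return prev[-1]

def normalize(s: str) -> str:
    # Crude stand-in for whisper-english normalization.
    return re.sub(r"[^\w\s]", "", s.lower()).strip()

def wer(ref: str, hyp: str) -> float:
    r, h = normalize(ref).split(), normalize(hyp).split()
    return edit_distance(r, h) / max(len(r), 1)

def cer(ref: str, hyp: str) -> float:
    r, h = normalize(ref), normalize(hyp)
    return edit_distance(r, h) / max(len(r), 1)

# One substituted word ("fibrilation") out of five reference words.
print(wer("The ECG showed atrial fibrillation.", "the ecg showed atrial fibrilation"))
```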