---
license: mit
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- medical
- asr
- entity-cer
- benchmark
size_categories:
- n<1K
---

# MultiMed Hard — Medical ASR Benchmark

Entity-aware medical ASR benchmark — 50 hard rows from medical lectures and interviews.

## Source

Derived from the [leduckhai/MultiMed](https://huggingface.co/datasets/leduckhai/MultiMed) EN test split (4,751 rows, MIT license). Audio is sourced from YouTube medical channels (lectures, interviews, podcasts, documentaries); transcripts are human-reviewed.

## Preparation

1. Length filter: keep rows with audio ≥ 2 s and ≤ 29 s, and text ≥ 20 chars
2. Casing filter: drop all-caps and all-lowercase rows (steps 1-2 are sketched in code below this list)
3. Whisper CER filter: drop rows with whisper-large-v3 CER > 10% (poor ground-truth alignment)
4. Entity tagging with Gemini Flash (6 medical categories)
5. Keep rows with ≥ 1 tagged entity (entity text ≥ 5 chars)
6. 3-model difficulty filter (whisper-large-v3, canary-1b-v2, Voxtral-Mini) with whisper-english normalization
7. Exclude rows with median entity CER > 0.9
8. LLM validation (Gemini Flash): drop non-medical content, generic entities, and ground-truth typos
9. Keep the top 50 rows by median entity CER
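
A minimal sketch of the length and casing filters (steps 1-2), assuming the source rows use the standard `datasets` audio dict (`array` + `sampling_rate`) and a `text` column; the config and split names are placeholders, and this is illustrative rather than the exact preparation script:

```python
from datasets import load_dataset

# Config/split names are assumptions -- adjust to the actual source layout.
ds = load_dataset("leduckhai/MultiMed", "English", split="test")

def keep_row(row) -> bool:
    audio = row["audio"]                          # decoded Audio feature
    duration = len(audio["array"]) / audio["sampling_rate"]
    text = row["text"].strip()
    if not (2.0 <= duration <= 29.0):             # step 1: 2 s <= audio <= 29 s
        return False
    if len(text) < 20:                            # step 1: text >= 20 chars
        return False
    if text in (text.upper(), text.lower()):      # step 2: casing filter
        return False
    return True

filtered = ds.filter(keep_row)
```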

## Entity categories

- **drug** — drug or medication names (brand or INN)
- **condition** — diagnoses, diseases, syndromes, disorders
- **procedure** — surgical, diagnostic, or therapeutic procedures
- **anatomy** — anatomical structures, organs, body regions
- **biomarker** — lab tests, biomarkers, genes, proteins, molecular markers
- **organisation** — hospitals, regulatory bodies, pharmaceutical companies

## Columns

- `audio` — 16 kHz WAV
- `text` — ground-truth transcript (human-reviewed)
- `entities` — JSON array of tagged medical entities with `text`, `category`, `char_start`, `char_end` (see the loading sketch below this list)
- `difficulty_rank` — 1 = hardest
- `median_entity_cer` — median entity CER across the 3 difficulty-filter models
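
A short loading sketch; the repo id below is hypothetical (inferred from the result links in the leaderboard), and `entities` is treated as a JSON string per the column description:

```python
import json

from datasets import load_dataset

# Hypothetical repo id -- substitute this dataset's actual id.
ds = load_dataset("Trelis/multimed-hard", split="test")

row = ds[0]
audio = row["audio"]        # dict with "array" and "sampling_rate" (16 kHz)
for ent in json.loads(row["entities"]):
    # char_start/char_end index into `text`, so the tagged surface form
    # can be recovered straight from the transcript.
    span = row["text"][ent["char_start"]:ent["char_end"]]
    print(f"{ent['category']:>12}: {span}")
```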

## Leaderboard (16 models, sorted by Entity CER)

| # | Model | WER | CER | Entity CER | Results |
|---|---|---|---|---|---|
| 1 | scribe-v2 | 0.100 | 0.060 | 0.134 | [results](https://huggingface.co/datasets/Trelis/eval-scribe-v2-multimed-hard-20260408-1933) |
| 2 | MultiMed-ST (whisper-small-en) | 0.115 | 0.075 | 0.160 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-small-english-multimed-hard-20260408-1935) |
| 3 | gemini-2.5-pro | 0.105 | 0.062 | 0.167 | [results](https://huggingface.co/datasets/Trelis/eval-gemini-2.5-pro-multimed-hard-20260408-1933) |
| 4 | ursa-2-enhanced | 0.105 | 0.060 | 0.196 | [results](https://huggingface.co/datasets/Trelis/eval-ursa-2-enhanced-multimed-hard-20260408-1933) |
| 5 | whisper-large-v3 | 0.085 | 0.052 | 0.197 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-large-v3-multimed-hard-20260408-1932) |
| 6 | nova-3 | 0.120 | 0.069 | 0.199 | [results](https://huggingface.co/datasets/Trelis/eval-nova-3-multimed-hard-20260408-1934) |
| 7 | whisper-large-v3-turbo | 0.093 | 0.056 | 0.218 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-large-v3-turbo-multimed-hard-20260408-1931) |
| 8 | whisper-small | 0.133 | 0.075 | 0.228 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-small-multimed-hard-20260408-1933) |
| 9 | parakeet-tdt-0.6b-v3 | 0.159 | 0.101 | 0.233 | [results](https://huggingface.co/datasets/Trelis/eval-parakeet-tdt-0.6b-v3-multimed-hard-20260408-1930) |
| 10 | universal-3-pro | 0.125 | 0.100 | 0.234 | [results](https://huggingface.co/datasets/Trelis/eval-universal-3-pro-multimed-hard-20260408-1933) |
| 11 | canary-1b-v2 | 0.150 | 0.093 | 0.255 | [results](https://huggingface.co/datasets/Trelis/eval-canary-1b-v2-multimed-hard-20260408-1931) |
| 12 | whisper-v3 (fireworks) | 0.130 | 0.090 | 0.261 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-v3-multimed-hard-20260408-1936) |
| 13 | Voxtral-Mini-3B-2507 | 0.109 | 0.075 | 0.273 | [results](https://huggingface.co/datasets/Trelis/eval-Voxtral-Mini-3B-2507-multimed-hard-20260408-1931) |
| 14 | medasr | 0.292 | 0.158 | 0.278 | [results](https://huggingface.co/datasets/Trelis/eval-medasr-multimed-hard-20260408-1932) |
| 15 | whisper-tiny | 0.236 | 0.144 | 0.360 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-tiny-multimed-hard-20260408-1930) |
| 16 | whisper-base | 0.221 | 0.156 | 0.379 | [results](https://huggingface.co/datasets/Trelis/eval-whisper-base-multimed-hard-20260408-1930) |

Evaluated with [Trelis Studio](https://studio.trelis.com) using whisper-english normalization.
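
For reference, a minimal sketch of a per-entity CER in the spirit of this benchmark, assuming "whisper-english normalization" refers to the `EnglishTextNormalizer` from the openai-whisper package; the hypothesis-window alignment here is a simplification, not the exact Trelis Studio metric:

```python
import difflib

import jiwer  # pip install jiwer
from whisper.normalizers import EnglishTextNormalizer  # pip install openai-whisper

normalize = EnglishTextNormalizer()

def entity_cer(entity_text: str, hypothesis: str) -> float:
    """CER of one tagged entity against the closest window of the hypothesis."""
    ref = normalize(entity_text)
    hyp = normalize(hypothesis)
    if not ref:
        return 0.0
    # Anchor on the longest common block, then score a same-length window.
    m = difflib.SequenceMatcher(a=ref, b=hyp, autojunk=False)
    match = m.find_longest_match(0, len(ref), 0, len(hyp))
    start = max(0, match.b - match.a)
    return jiwer.cer(ref, hyp[start:start + len(ref)])
```

A row's `median_entity_cer` would then be the median of such per-entity scores across the three difficulty-filter models.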