Speech-to-Text Benchmark Results

Overview

This benchmark evaluates the accuracy of various Speech-to-Text (STT) models on long-form audio transcription. The evaluation is based on a podcast audio file with a professionally transcribed ground truth reference.

Ground Truth: 4,748 words | 24,929 characters

Total Runs Evaluated: 8
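
For reference, the ground-truth statistics above can be reproduced with a few lines of Python; the filename is a placeholder, and exact counts may vary slightly with whitespace handling:

```python
# Reproduce the ground-truth statistics quoted above.
# "ground_truth.txt" is a placeholder for the reference transcript.
with open("ground_truth.txt", encoding="utf-8") as f:
    reference = f.read()

print(f"{len(reference.split()):,} words | {len(reference):,} characters")
```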

Key Findings

Highest Word Accuracy: Local Whisper-Base

  • WER: 17.52%
  • Word Accuracy: 82.48%
  • Provider: Local inference via Buzz
  • Model: whisper-base
  • Punctuation Score: 21.90%

The local Whisper-Base model achieved the highest word accuracy among tested models while recording the second-lowest punctuation score (only whisper-tiny scored lower).

Highest Punctuation Score: Deepgram Nova-3

  • Punctuation Score: 51.17%
  • Context Match Accuracy: 32.33%
  • Total Punctuation: 698 (ref: 688)
  • Word Accuracy: 81.28% (ranked 3rd)

Deepgram Nova-3 recorded the highest punctuation score while maintaining word accuracy within 1.2 percentage points of the top performer.

Detailed Results

Rankings by Word Accuracy

| Rank | Provider | Model | WER % | CER % | Word Accuracy % | Punct Score % |
|------|----------|-------|-------|-------|-----------------|---------------|
| 1 | Local | whisper-base | 17.52 | 5.38 | 82.48 | 21.90 |
| 2 | Local | whisper-base (auto-detect) | 17.52 | 5.38 | 82.48 | 21.90 |
| 3 | Deepgram | nova-3 | 18.72 | 7.33 | 81.28 | 51.17 |
| 4 | AssemblyAI | best | 18.79 | 6.24 | 81.21 | 48.43 |
| 5 | OpenAI | whisper-1 | 19.27 | 6.40 | 80.73 | 44.44 |
| 6 | Gladia | solaria-1 | 20.83 | 6.30 | 79.17 | 44.13 |
| 7 | Speechmatics | slam-1-global-english | 21.65 | 7.15 | 78.35 | 38.23 |
| 8 | Local | whisper-tiny | 22.49 | 8.39 | 77.51 | 18.78 |

Rankings by Punctuation Accuracy

| Rank | Provider | Model | Punct Score % | Context Match % | Punct Count (model / ref) | Word Accuracy % |
|------|----------|-------|---------------|-----------------|---------------------------|-----------------|
| 1 | Deepgram | nova-3 | 51.17 | 32.33 | 698 / 688 | 81.28 |
| 2 | AssemblyAI | best | 48.43 | 33.72 | 791 / 688 | 81.21 |
| 3 | OpenAI | whisper-1 | 44.44 | 34.42 | 911 / 688 | 80.73 |
| 4 | Gladia | solaria-1 | 44.13 | 22.56 | 651 / 688 | 79.17 |
| 5 | Speechmatics | slam-1-global-english | 38.23 | 30.00 | 1003 / 688 | 78.35 |
| 6 | Local | whisper-base | 21.90 | 13.02 | 292 / 688 | 82.48 |
| 7 | Local | whisper-base (auto-detect) | 21.90 | 13.02 | 292 / 688 | 82.48 |
| 8 | Local | whisper-tiny | 18.78 | 8.60 | 288 / 688 | 77.51 |
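
Rankings like the tables above can be generated from per-run metrics with a short pandas sketch; the CSV layout and column names here are assumptions for illustration, not the repository's actual format:

```python
# Sort per-run results into the word-accuracy ranking shown above.
# "results.csv" and its columns (provider, model, wer, cer, punct_score)
# are assumed for illustration.
import pandas as pd

df = pd.read_csv("results.csv")
df["word_accuracy"] = 100.0 - df["wer"]

ranked = df.sort_values("wer").reset_index(drop=True)
ranked.index += 1  # 1-based rank
print(ranked[["provider", "model", "wer", "cer", "word_accuracy", "punct_score"]])
```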

Analysis

Local vs Cloud Performance

Local Models:

  • Whisper-Base: 82.48% accuracy (ranked 1st)
  • Whisper-Tiny: 77.51% accuracy (ranked 8th)

Cloud Models:

  • Highest: Deepgram Nova-3 at 81.28% (ranked 3rd)
  • Range: 78.35% - 81.28%

The local Whisper-Base model achieved 1.2 percentage points higher word accuracy than the highest-scoring cloud service.

Language Detection Impact

Run-1 (language specified as "en") and Run-3 (auto-detect) both used whisper-base and achieved identical results (17.52% WER).
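
The benchmark's local runs went through Buzz, but the difference between the two configurations can be illustrated directly with the open-source whisper package; the file path is a placeholder:

```python
# Compare explicit language specification with auto-detection.
# "podcast.mp3" is a placeholder for the benchmark audio file.
import whisper

model = whisper.load_model("base")

# Run-1: language specified explicitly
result_en = model.transcribe("podcast.mp3", language="en")

# Run-3: language auto-detected (whisper detects from the first 30 seconds)
result_auto = model.transcribe("podcast.mp3")

print(result_en["text"] == result_auto["text"])
```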

Error Type Distribution

Local Whisper-Base (Highest Word Accuracy)

  • Hits: 3,960 (83.4%)
  • Substitutions: 726 (15.3%)
  • Deletions: 62 (1.3%)
  • Insertions: 44 (0.9%)

Deepgram Nova-3 (Highest Cloud Word Accuracy)

  • Hits: 3,919 (82.5%)
  • Substitutions: 615 (13.0%)
  • Deletions: 214 (4.5%)
  • Insertions: 60 (1.3%)

OpenAI Whisper-1

  • Hits: 3,947 (83.1%)
  • Substitutions: 695 (14.6%)
  • Deletions: 106 (2.2%)
  • Insertions: 114 (2.4%)
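
Counts like these fall out of the word-level alignment computed by the jiwer library named in Technical Notes; a minimal sketch (jiwer 3.x API, placeholder file paths):

```python
# Derive hit/substitution/deletion/insertion counts with jiwer (>= 3.0).
# File paths are placeholders for the reference and model transcripts.
import jiwer

reference = open("ground_truth.txt", encoding="utf-8").read()
hypothesis = open("whisper_base_output.txt", encoding="utf-8").read()

out = jiwer.process_words(reference, hypothesis)
ref_words = out.hits + out.substitutions + out.deletions  # reference length

print(f"WER:           {out.wer:.2%}")
print(f"Hits:          {out.hits} ({out.hits / ref_words:.1%})")
print(f"Substitutions: {out.substitutions} ({out.substitutions / ref_words:.1%})")
print(f"Deletions:     {out.deletions} ({out.deletions / ref_words:.1%})")
print(f"Insertions:    {out.insertions} ({out.insertions / ref_words:.1%})")
```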

Character Error Rate (CER) vs Word Error Rate (WER)

All models show CER substantially lower than WER, as expected, since substituted words usually still share most of their characters with the reference:

  • Lowest CER: 5.38% (Local Whisper-Base)
  • Highest CER: 8.39% (Local Whisper-Tiny)
  • CER/WER ratio: ~0.30 - 0.39
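
Both metrics come from the same edit-distance machinery, applied at character rather than word granularity; a sketch with jiwer (placeholder file paths):

```python
# Compute WER, CER, and their ratio for one model's output.
import jiwer

ref = open("ground_truth.txt", encoding="utf-8").read()
hyp = open("model_output.txt", encoding="utf-8").read()

wer = jiwer.wer(ref, hyp)
cer = jiwer.cer(ref, hyp)
print(f"WER {wer:.2%} | CER {cer:.2%} | CER/WER ratio {cer / wer:.2f}")
```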

Model Categories

Premium Cloud Services

  • Deepgram Nova-3: 81.28% (WER: 18.72%)
  • AssemblyAI Best: 81.21% (WER: 18.79%)
  • OpenAI Whisper-1: 80.73% (WER: 19.27%)

These three models cluster closely together, separated by less than one percentage point of word accuracy.

Specialized Cloud Services

  • Gladia Solaria-1: 79.17% (WER: 20.83%)
  • Speechmatics SLAM-1: 78.35% (WER: 21.65%)

Local Inference

  • Whisper-Base: 82.48% (WER: 17.52%) - Highest word accuracy
  • Whisper-Tiny: 77.51% (WER: 22.49%) - Lowest word accuracy

Punctuation Analysis

Local Model Punctuation Performance

Local Whisper models produced only about 42% as many punctuation marks as the reference (288-292 out of 688):

  • Not detected: Exclamation marks, quotation marks, colons
  • Periods: 16% detection rate (42 out of 263)
  • Commas: 30% detection rate (31 out of 104)
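
Per-mark tallies like these can be reproduced with a simple count; this is a sketch, not the benchmark's actual scoring code, and the file paths and mark set are assumptions:

```python
# Tally punctuation marks per class in reference and hypothesis transcripts.
from collections import Counter

PUNCT = set(".,!?;:'\"")  # assumed mark set

def punct_counts(path: str) -> Counter:
    with open(path, encoding="utf-8") as f:
        return Counter(ch for ch in f.read() if ch in PUNCT)

ref = punct_counts("ground_truth.txt")
hyp = punct_counts("whisper_base_output.txt")

for mark, ref_n in ref.most_common():
    hyp_n = hyp.get(mark, 0)
    print(f"{mark!r}: {hyp_n} / {ref_n} ({hyp_n / ref_n:.0%} of reference)")
```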

Cloud Services: Punctuation Patterns

Highest Punctuation Score (Deepgram Nova-3):

  • Count: 698 vs 688 reference
  • 32.33% context match accuracy

Higher Punctuation Counts:

  • Speechmatics: 1,003 marks (+315, 46% above reference)
  • OpenAI Whisper-1: 911 marks (+223, 32% above reference)
  • AssemblyAI: 791 marks (+103, 15% above reference)

Lower Punctuation Count:

  • Gladia: 651 marks (-37, 5% below reference)
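
The report does not define "context match". One plausible reading is that a hypothesis mark counts as matched when the same mark follows the same preceding word as in the reference; the sketch below implements that assumed definition, not the benchmark's actual metric:

```python
# Hypothetical "context match" check: a punctuation mark is matched when
# the hypothesis places the same mark after the same preceding word.
# This definition is an assumption for illustration only.
import re

def punct_contexts(text: str) -> list[tuple[str, str]]:
    # (preceding word, punctuation mark) pairs, case-folded
    return re.findall(r"(\w+)([.,!?;:])", text.lower())

ref_pairs = punct_contexts(open("ground_truth.txt", encoding="utf-8").read())
hyp_pairs = set(punct_contexts(open("model_output.txt", encoding="utf-8").read()))

matched = sum(1 for pair in ref_pairs if pair in hyp_pairs)
print(f"Context match: {matched / len(ref_pairs):.2%}")
```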

Conclusions

  1. Word accuracy and punctuation tradeoff: Local Whisper-Base achieved the highest word accuracy (82.48%) but the second-lowest punctuation score (21.90%; only whisper-tiny scored lower).

  2. Deepgram Nova-3 performance: Recorded word accuracy of 81.28% (1.2 percentage points below the highest) and the highest punctuation score (51.17%, roughly 2.3x the local models' scores).

  3. Cloud vs local punctuation performance: Cloud services scored 38-51% on punctuation compared to 19-22% for local models.

  4. Model size impact: Whisper-base achieved 4.97 percentage points higher accuracy than whisper-tiny, with similar punctuation scores.

  5. Language detection: Explicit language specification vs auto-detection produced identical results (17.52% WER) for whisper-base on this English audio sample.

Model Selection Considerations

Deepgram Nova-3

  • Punctuation score: 51.17% (highest)
  • Word accuracy: 81.28%
  • Processing time: ~3 seconds for the ~27-minute file
  • Punctuation count: 698 vs 688 reference
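
A minimal sketch of submitting the file to Deepgram's prerecorded HTTP endpoint, with the model name taken from the results above; the file path and API-key handling are placeholders:

```python
# Send a local audio file to Deepgram's prerecorded transcription endpoint.
import os
import requests

with open("podcast.mp3", "rb") as f:  # placeholder path
    audio = f.read()

resp = requests.post(
    "https://api.deepgram.com/v1/listen",
    params={"model": "nova-3", "punctuate": "true"},
    headers={
        "Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}",
        "Content-Type": "audio/mpeg",
    },
    data=audio,
)
resp.raise_for_status()
print(resp.json()["results"]["channels"][0]["alternatives"][0]["transcript"])
```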

Local Whisper-Base

  • Word accuracy: 82.48% (highest)
  • Punctuation score: 21.90% (second-lowest, ahead of only whisper-tiny)
  • Zero marginal cost per transcription
  • Produced 42% as many punctuation marks as the reference

Local Whisper-Base + Cloud Post-Processing

  • Initial transcription: 82.48% word accuracy
  • Requires secondary cloud processing for punctuation
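
A hedged sketch of such a two-stage pipeline: local whisper produces the words, then a cloud LLM pass restores punctuation. The file path, model choice, and prompt are assumptions for illustration, not part of the benchmark:

```python
# Stage 1: local transcription; Stage 2: cloud punctuation restoration.
# "podcast.mp3", the model choice, and the prompt are illustrative only.
import whisper
from openai import OpenAI

raw_text = whisper.load_model("base").transcribe("podcast.mp3")["text"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable LLM would do
    messages=[
        {"role": "system",
         "content": "Restore punctuation and capitalization in the user's "
                    "transcript. Do not add, remove, or reorder any words."},
        {"role": "user", "content": raw_text},
    ],
)
print(resp.choices[0].message.content)
```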

Gladia Solaria-1

  • Word accuracy: 79.17%
  • Punctuation score: 44.13%
  • Punctuation count: 651 (5% below reference)

AssemblyAI Best

  • Word accuracy: 81.21% (second-highest among cloud services, behind Deepgram Nova-3)
  • Punctuation score: 48.43%
  • Punctuation count: 791 (15% above reference)

Technical Notes

  • Evaluation Metric: Word Error Rate (WER) and Character Error Rate (CER) using jiwer library
  • Audio Duration: ~27 minutes (1,637.97 seconds based on Deepgram metadata)
  • Reference Quality: Professional human transcription
  • Test Type: Single audio file, English language, podcast format
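
Note that WER figures depend on text normalization before scoring. Whether this benchmark normalized its transcripts is not stated; a common pre-scoring step looks like the sketch below, with placeholder file paths:

```python
# Typical normalization applied before WER scoring so that casing and
# punctuation differences do not count as word errors. Whether this
# benchmark applied such a step is an assumption.
import string
import jiwer

def normalize(text: str) -> str:
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

ref = open("ground_truth.txt", encoding="utf-8").read()
hyp = open("model_output.txt", encoding="utf-8").read()
print(f"normalized WER: {jiwer.wer(normalize(ref), normalize(hyp)):.2%}")
```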