# Whisper Medium (eole)

This is `openai/whisper-medium` converted to the eole format with `eole convert --model_dir openai/whisper-medium`.

No weights were modified; this is a format conversion only.

## Model details

| Property | Value |
|---|---|
| Original model | `openai/whisper-medium` |
| Parameters | 769M |
| Encoder layers | 24 |
| Decoder layers | 24 |
| Hidden size | 1024 |
| Attention heads | 16 |
| Mel bins | 80 |
| Vocab size | 51,865 |
| License | Apache 2.0 |
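The 769M parameter count can be roughly sanity-checked against the architecture numbers above. A back-of-the-envelope sketch (it deliberately ignores the convolutional stem, positional embeddings, biases, and layer norms, which contribute comparatively little):

```python
# Rough parameter-count estimate for whisper-medium from its hyperparameters.
# Ignores the conv stem, positional embeddings, biases, and layer norms.
d = 1024            # hidden size
enc_layers = 24
dec_layers = 24
vocab = 51865

attn = 4 * d * d          # Q, K, V, and output projections
ffn = 2 * d * (4 * d)     # two linear layers with a 4x expansion

encoder = enc_layers * (attn + ffn)      # self-attention + FFN per layer
decoder = dec_layers * (2 * attn + ffn)  # self- and cross-attention + FFN
embeddings = vocab * d                   # token embedding table

total = encoder + decoder + embeddings
print(f"~{total / 1e6:.0f}M parameters")  # lands within a few percent of 769M
```

The small remainder versus the reported figure is accounted for by the terms the sketch omits.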

## Usage

```shell
pip install eole[wer]
```

### Transcribe

```shell
eole predict \
  -config eval_config.yaml \
  -model_path whisper-medium-eole \
  -src audio_files.txt \
  -output transcriptions.txt \
  -language en \
  -task transcribe \
  -gpu_ranks 0
```
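The `-src` file is assumed here to be a plain-text manifest with one audio path per line, as the filename suggests (check the eole documentation for the exact expected format). A minimal sketch for generating it:

```python
# Build a one-path-per-line manifest of .wav files for the -src argument.
# "audio/" is a placeholder directory name, not something eole requires.
from pathlib import Path

audio_dir = Path("audio")
audio_dir.mkdir(exist_ok=True)  # placeholder dir so the sketch runs standalone
paths = sorted(audio_dir.glob("*.wav"))
Path("audio_files.txt").write_text("\n".join(str(p) for p in paths) + "\n")
```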

## Evaluation

All evaluations use beam size 5.

| Benchmark | WER |
|---|---|
| LibriSpeech test-clean | 2.92% |
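WER is the standard word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. The `eole[wer]` extra pulls in eole's own scorer; the sketch below is an independent reimplementation for illustration, not eole's code:

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown fox"))  # 0.0
print(wer("the quick brown fox", "the quik brown"))       # 0.5 (1 sub + 1 del over 4 words)
```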

## Conversion

```shell
eole convert --model_dir openai/whisper-medium --output whisper-medium-eole
```

## Citation

```bibtex
@misc{radford2023robust,
      title={Robust Speech Recognition via Large-Scale Weak Supervision},
      author={Alec Radford and Jong Wook Kim and Tao Xu and Greg Brockman and Christine McLeavey and Ilya Sutskever},
      year={2023},
      eprint={2212.04356},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```