# Whisper Small (eole)

This is `openai/whisper-small` converted to the eole format using `eole convert --model_dir openai/whisper-small`.

No weights were modified — this is a format conversion only.

## Model details

| Property | Value |
|---|---|
| Original model | openai/whisper-small |
| Parameters | 244M |
| Encoder layers | 12 |
| Decoder layers | 12 |
| Hidden size | 768 |
| Attention heads | 12 |
| Mel bins | 80 |
| Vocab size | 51,865 |
| License | Apache 2.0 |
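As a rough sanity check on the figures above, the parameter count can be estimated from the architecture alone. The sketch below ignores biases, layer norms, positional embeddings, and the convolutional audio frontend, so it slightly underestimates the reported 244M:

```python
# Back-of-envelope Whisper Small parameter estimate from the table above.
h, enc_layers, dec_layers, vocab = 768, 12, 12, 51865

# Encoder layer: self-attention (Q, K, V, O projections = 4*h*h)
# plus feed-forward with d_ff = 4h (two h x 4h matrices = 8*h*h).
enc = enc_layers * (4 * h * h + 8 * h * h)
# Decoder layer adds a cross-attention block (another 4*h*h).
dec = dec_layers * (4 * h * h + 4 * h * h + 8 * h * h)
# Token embedding matrix (tied with the output projection).
emb = vocab * h

total = enc + dec + emb
print(f"{total / 1e6:.0f}M")  # prints "238M", consistent with the 244M reported
```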

## Usage

```shell
pip install eole[wer]
```

### Transcribe

```shell
eole predict \
  -config eval_config.yaml \
  -model_path whisper-small-eole \
  -src audio_files.txt \
  -output transcriptions.txt \
  -language en \
  -task transcribe \
  -gpu_ranks 0
```
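The `-src` argument above points at a text file of inputs. A minimal sketch for building it, assuming one audio file path per line (the helper name and the directory layout are illustrative; check the eole documentation for the exact expected format):

```python
# Hypothetical helper: write each *.wav path under audio_dir into a manifest
# file, one path per line, for use as the -src argument of eole predict.
from pathlib import Path


def write_manifest(audio_dir: str, manifest: str = "audio_files.txt") -> int:
    """List *.wav files under audio_dir into manifest; return the count."""
    paths = sorted(str(p) for p in Path(audio_dir).glob("*.wav"))
    Path(manifest).write_text("\n".join(paths) + ("\n" if paths else ""))
    return len(paths)
```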

## Evaluation

All evaluations use beam size 5.

| Benchmark | WER |
|---|---|
| LibriSpeech test-clean | 3.30% |
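The `eole[wer]` extra installed above provides WER scoring for runs like this. For reference, word error rate is the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal standalone sketch (not eole's implementation):

```python
# Word error rate via word-level Levenshtein distance (standard definition).
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```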

## Conversion

```shell
eole convert --model_dir openai/whisper-small --output whisper-small-eole
```

## Citation

```bibtex
@misc{radford2023robust,
      title={Robust Speech Recognition via Large-Scale Weak Supervision},
      author={Alec Radford and Jong Wook Kim and Tao Xu and Greg Brockman and Christine McLeavey and Ilya Sutskever},
      year={2023},
      eprint={2212.04356},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```