Whisper Amharic Small v2

Fine-tuned Whisper Small model for Amharic speech recognition, trained on the Mozilla Common Voice dataset.

Performance

  • WER (Word Error Rate): 69%, improved from 77% in v1
  • Dataset: Mozilla Common Voice Amharic
  • Base Model: openai/whisper-small
  • Model size: 0.2B parameters (F32, stored as safetensors)

Usage

from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the processor (feature extractor + tokenizer) and the fine-tuned model
processor = WhisperProcessor.from_pretrained("chappM/whisper-amharic-small-v2")
model = WhisperForConditionalGeneration.from_pretrained("chappM/whisper-amharic-small-v2")

Optimized for Amharic speech recognition tasks.
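A fuller usage sketch covering the inference step is shown below. It assumes the input is a 16 kHz mono waveform as a NumPy array (Whisper's expected sampling rate); the `transcribe` helper name is illustrative, not part of the model's API.

```python
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

def transcribe(audio_array, sampling_rate=16000):
    """Transcribe a 16 kHz mono waveform (NumPy array) to Amharic text."""
    # Load the processor and the fine-tuned Amharic model
    processor = WhisperProcessor.from_pretrained("chappM/whisper-amharic-small-v2")
    model = WhisperForConditionalGeneration.from_pretrained("chappM/whisper-amharic-small-v2")

    # Convert raw audio to log-mel spectrogram features
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")

    # Generate token IDs and decode them back to text
    with torch.no_grad():
        predicted_ids = model.generate(inputs.input_features)
    return processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```

Audio can be loaded and resampled to 16 kHz with a library such as librosa or torchaudio before being passed to the helper.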
