How to use BKat/whisper-small-bg with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="BKat/whisper-small-bg")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("BKat/whisper-small-bg")
model = AutoModelForSpeechSeq2Seq.from_pretrained("BKat/whisper-small-bg")
```

This model is a fine-tuned version of openai/whisper-small on the mozilla-foundation/common_voice_13_0 dataset. It achieves the following results on the evaluation set (final checkpoint, step 1000): Validation Loss 0.3983, Wer Ortho 30.2504, Wer 23.2648.
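Whisper models expect 16 kHz mono audio as an array of float samples. The sketch below illustrates that input format using a synthetic one-second tone in place of a real recording; the `pipe(...)` calls are shown as comments because running them downloads the full checkpoint, and the dict-input form assumes the samples are passed as a NumPy array.

```python
import math

SAMPLE_RATE = 16_000  # Whisper is trained on 16 kHz mono audio

# Synthetic stand-in for a real recording: one second of a 440 Hz tone,
# as float samples in [-1.0, 1.0].
waveform = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]

# With a real recording you would pass either a file path or raw samples:
# result = pipe("clip.wav")                                          # hypothetical file
# result = pipe({"raw": samples, "sampling_rate": SAMPLE_RATE})      # samples: NumPy array
# print(result["text"])                                              # the transcription

print(len(waveform))  # number of samples in one second of audio
```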
Training results:
| Training Loss | Epoch | Step | Validation Loss | WER Ortho (%) | WER (%) |
|---|---|---|---|---|---|
| 0.0787 | 2.78 | 500 | 0.3445 | 31.2999 | 24.2365 |
| 0.0145 | 5.56 | 1000 | 0.3983 | 30.2504 | 23.2648 |
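The WER columns report word error rate: the word-level edit distance between the model's transcription and the reference, divided by the number of reference words ("Ortho" is typically computed on the raw orthographic text, the other column after text normalization). A minimal sketch of the metric, purely illustrative and not the evaluation script used for this model:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return 100.0 * d[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the dog sat"))  # one substitution out of three words
```

So the final checkpoint's WER of 23.2648 means roughly 23 word errors per 100 reference words on the evaluation set.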
Base model: openai/whisper-small