How to use hyunseop/whisper-base-dv with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="hyunseop/whisper-base-dv")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("hyunseop/whisper-base-dv")
model = AutoModelForSpeechSeq2Seq.from_pretrained("hyunseop/whisper-base-dv")
```

This model is a fine-tuned version of openai/whisper-small on the Common Voice 13 dataset. It achieves the following results on the evaluation set:
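Once loaded, the pipeline accepts raw audio as a NumPy array together with its sampling rate; Whisper checkpoints expect 16 kHz mono float32 input. A minimal sketch of preparing arbitrary-rate audio, using linear interpolation as a rough stand-in for a proper resampler such as librosa or torchaudio (the helper name and the 8 kHz example are illustrative, not from the card):

```python
import numpy as np

def to_whisper_input(samples: np.ndarray, sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Resample mono float audio to the 16 kHz rate Whisper expects.

    Linear interpolation keeps this sketch dependency-free; a real
    pipeline would use librosa.resample or torchaudio instead.
    """
    samples = samples.astype(np.float32)
    if sr == target_sr:
        return samples
    duration = len(samples) / sr
    n_out = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(samples), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_out, endpoint=False)
    return np.interp(new_t, old_t, samples).astype(np.float32)

# One second of 8 kHz audio becomes 16,000 samples at 16 kHz.
audio_16k = to_whisper_input(np.zeros(8_000, dtype=np.float32), sr=8_000)

# The resampled array would then be passed to the pipeline, e.g.:
# pipe({"raw": audio_16k, "sampling_rate": 16_000})
```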
- Loss: 0.6133
- WER Ortho: 147.8838
- WER: 118.5812

More information needed on the model description, intended uses and limitations, and training and evaluation data.
Training results:
| Training Loss | Epoch | Step | Validation Loss | WER Ortho (%) | WER (%) |
|---|---|---|---|---|---|
| 0.6636 | 1.6287 | 500 | 0.6133 | 147.8838 | 118.5812 |
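Both evaluation columns are word error rates in percent: WER Ortho is computed on the raw (orthographic) transcripts, while WER is computed after text normalization. WER is the word-level edit distance divided by the number of reference words, so it can exceed 100% when the hypothesis contains many insertions, as in the row above. Training scripts typically use the `wer` metric from the Hugging Face `evaluate` library; a dependency-free sketch of the computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# Insertions can push WER well above 1.0 (i.e. above 100%):
print(wer("a", "x y z"))  # 3.0, i.e. 300%
```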
Base model: openai/whisper-small