How to use vtking/whisper-small-vi with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="vtking/whisper-small-vi")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("vtking/whisper-small-vi")
model = AutoModelForSpeechSeq2Seq.from_pretrained("vtking/whisper-small-vi")
```

This model is a fine-tuned version of openai/whisper-small on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.7710
- Wer: 31.5612
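The metric reported above is word error rate (WER). As a rough sketch of what that number means, WER is the word-level edit distance between reference and hypothesis transcripts, divided by the reference length; the card's actual scores come from the evaluation tooling used during fine-tuning, and the example strings below are purely illustrative:

```python
# Minimal word error rate (WER) computation: Levenshtein edit distance
# over word tokens, expressed as a percentage of the reference length.
# Illustrative only; not the exact implementation used for this card.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four -> 25.0
print(wer("xin chào các bạn", "xin chào cá bạn"))
```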
The following hyperparameters were used during training:

More information needed

Training results:
| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
| 0.0472 | 5.0 | 315 | 0.6506 | 31.8977 |
| 0.0016 | 10.0 | 630 | 0.7401 | 31.2248 |
| 0.0007 | 15.0 | 945 | 0.7637 | 31.3594 |
| 0.0006 | 20.0 | 1260 | 0.7710 | 31.5612 |
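One way to read the table above: training loss collapses to near zero while validation loss keeps rising after epoch 5, a classic overfitting signature, and validation WER bottoms out at epoch 10. A small sketch of checkpoint selection over these logged rows (the tuples simply restate the table):

```python
# Training-log rows from the table above: (epoch, train_loss, val_loss, wer)
rows = [
    (5.0, 0.0472, 0.6506, 31.8977),
    (10.0, 0.0016, 0.7401, 31.2248),
    (15.0, 0.0007, 0.7637, 31.3594),
    (20.0, 0.0006, 0.7710, 31.5612),
]

# Selecting a checkpoint by the metric you care about can give different
# answers: lowest WER is at epoch 10, lowest validation loss at epoch 5.
best_by_wer = min(rows, key=lambda r: r[3])
best_by_loss = min(rows, key=lambda r: r[2])
print(f"lowest WER {best_by_wer[3]} at epoch {best_by_wer[0]:.0f}")
print(f"lowest validation loss {best_by_loss[2]} at epoch {best_by_loss[0]:.0f}")
```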