How to use thiagoms7/whisper-tiny-en with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="thiagoms7/whisper-tiny-en")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("thiagoms7/whisper-tiny-en")
model = AutoModelForSpeechSeq2Seq.from_pretrained("thiagoms7/whisper-tiny-en")
```

This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 13 dataset. It achieves the following results on the evaluation set:
More information needed
The following results were logged during training:
| Training Loss | Epoch | Step | Validation Loss | WER (ortho) | WER |
|---|---|---|---|---|---|
| 0.0007 | 17.86 | 500 | 0.6431 | 0.3578 | 0.3548 |
| 0.0002 | 35.71 | 1000 | 0.7066 | 0.3664 | 0.3648 |
| 0.0001 | 53.57 | 1500 | 0.7466 | 0.3683 | 0.3672 |
| 0.0001 | 71.43 | 2000 | 0.7762 | 0.3658 | 0.3654 |
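The table above reports word error rate (WER): the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. As a minimal illustrative sketch (this helper is not part of the model repo; in practice a library such as `jiwer` or `evaluate` would be used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)]/ len(ref)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 edits / 6 words ≈ 0.333
```

A WER of 0.365 thus means roughly one word-level error for every three reference words. The "ortho" column is computed on the orthographic (unnormalized) text, which is why it is slightly higher.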
Base model: openai/whisper-tiny