This model is a fine-tuned version of openai/whisper-tiny on the PolyAI/minds14 dataset. Its final evaluation results are reported in the training results table below.

To load the model directly:

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("vadhri/whisper-tiny")
model = AutoModelForSpeechSeq2Seq.from_pretrained("vadhri/whisper-tiny")
```
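Once loaded, the processor and model can be run end-to-end. Below is a minimal sketch, assuming `transformers` and `torch` are installed and network access is available; it uses one second of synthetic silence in place of real speech, so substitute your own 16 kHz audio array in practice:

```python
import torch
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("vadhri/whisper-tiny")
model = AutoModelForSpeechSeq2Seq.from_pretrained("vadhri/whisper-tiny")

# One second of silence at Whisper's expected 16 kHz sampling rate
# (a stand-in for real speech; replace with your own audio samples)
audio = torch.zeros(16000).numpy()

# Convert raw audio to log-mel input features, then decode with generate()
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(inputs.input_features, max_new_tokens=32)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```

The `max_new_tokens` cap here is only to keep the sketch fast; raise it for longer utterances.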
Model description, intended uses and limitations, and training and evaluation data: more information needed.
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | WER (Ortho) | WER |
|---|---|---|---|---|---|
| 0.0006 | 17.86 | 500 | 0.6785 | 0.3644 | 0.3570 |
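The WER columns above are word error rates. As a rough illustration only (not the exact metric implementation behind these numbers, which model cards typically compute with the `evaluate`/`jiwer` libraries), WER is the word-level edit distance between reference and hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance (illustrative sketch)."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # match/substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of three reference words
print(wer("check my balance", "check the balance"))  # → 0.333...
```

"WER (Ortho)" is computed on the orthographic (unnormalized) text, so punctuation and casing differences also count as errors, which is why it is slightly higher than the normalized WER.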
Base model: openai/whisper-tiny
Alternatively, use a pipeline as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="vadhri/whisper-tiny")
```
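The pipeline accepts a path to an audio file or a raw NumPy array (assumed to already be at the model's 16 kHz sampling rate). A minimal sketch, again using synthetic silence as placeholder input and assuming `transformers` is installed with network access:

```python
import numpy as np
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="vadhri/whisper-tiny")

# One second of silence at 16 kHz as placeholder input;
# in practice pass real speech, e.g. pipe("sample.wav")
result = pipe(np.zeros(16000, dtype=np.float32))
print(result["text"])
```

The pipeline handles feature extraction and decoding internally, returning a dict whose `"text"` key holds the transcription.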