How to use agercas/whisper-tiny-us with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="agercas/whisper-tiny-us")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("agercas/whisper-tiny-us")
model = AutoModelForSpeechSeq2Seq.from_pretrained("agercas/whisper-tiny-us")
```

This model is a fine-tuned version of openai/whisper-tiny on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set:
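As a sketch of end-to-end use (the local file name `sample.wav` is a hypothetical example, not from this card), the pipeline can transcribe an audio file directly:

```python
# Transcription sketch; "sample.wav" is a hypothetical local audio file.
from transformers import pipeline

def transcribe(path: str) -> str:
    # The ASR pipeline decodes the file and resamples it to the
    # 16 kHz rate that Whisper models expect.
    pipe = pipeline("automatic-speech-recognition", model="agercas/whisper-tiny-us")
    return pipe(path)["text"]

if __name__ == "__main__":
    print(transcribe("sample.wav"))
```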
- Loss: 0.7183
- Wer Ortho: 0.3381
- Wer: 0.3312
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|---|---|---|---|---|---|
| 0.0012 | 17.86 | 500 | 0.7183 | 0.3381 | 0.3312 |
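The two WER columns differ only in text normalization: "Wer Ortho" scores the raw (orthographic) transcript, while "Wer" is computed after normalizing the text. A minimal, dependency-free sketch of the metric (the lowercase-and-strip-punctuation normalization shown here is an assumption, not necessarily what this evaluation used):

```python
import string

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over word sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,         # delete r
                       d[j - 1] + 1,     # insert h
                       prev + (r != h))  # substitute (free when words match)
            prev = cur
    return d[-1] / len(ref)

def normalize(text: str) -> str:
    # Assumed normalization: lowercase and drop punctuation.
    return text.lower().translate(str.maketrans("", "", string.punctuation))

ref, hyp = "Hello, world!", "hello world"
print(wer(ref, hyp))                        # orthographic WER: 1.0
print(wer(normalize(ref), normalize(hyp)))  # normalized WER: 0.0
```

This illustrates why "Wer Ortho" (0.3381) is slightly higher than "Wer" (0.3312): casing and punctuation mismatches count as errors only in the orthographic score.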