Dataset: PolyAI/minds14
How to use Vickyee/whisper-tiny-minds14-us with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="Vickyee/whisper-tiny-minds14-us")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("Vickyee/whisper-tiny-minds14-us")
model = AutoModelForSpeechSeq2Seq.from_pretrained("Vickyee/whisper-tiny-minds14-us")
```

This model is a fine-tuned version of openai/whisper-tiny on the en-US split of the PolyAI/minds14 dataset. It achieves the following results on the evaluation set:
More information needed
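Once the processor and model are loaded directly, inference follows the standard Whisper flow: convert the waveform to log-mel features, generate token ids, and decode them. The sketch below assumes a placeholder one-second silent clip at 16 kHz (Whisper's expected sampling rate) standing in for a real recording; swap in an actual waveform loaded with, e.g., librosa or soundfile.

```python
# Minimal inference sketch for the directly loaded processor and model.
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

model_id = "Vickyee/whisper-tiny-minds14-us"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id)

# Placeholder audio: one second of silence at 16 kHz; replace with a real waveform.
audio = np.zeros(16000, dtype=np.float32)

# Waveform -> log-mel input features -> generated token ids -> text.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)

transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
```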
Training hyperparameters: more information needed.

Training results:
| Training Loss | Epoch | Step | Validation Loss | WER | WER (Ortho) |
|---|---|---|---|---|---|
| 1.1149 | 1.79 | 100 | 0.5379 | 0.4097 | 0.4176 |
| 0.1705 | 3.57 | 200 | 0.7637 | 0.5762 | 0.5836 |
| 0.166 | 5.36 | 300 | 1.2479 | 0.5384 | 0.5416 |
| 0.2409 | 7.14 | 400 | 1.5261 | 0.6765 | 0.6619 |
| 0.2773 | 8.93 | 500 | 1.8106 | 0.7863 | 0.7816 |
| 0.2715 | 10.71 | 600 | 2.0421 | 0.7739 | 0.7841 |
| 0.2434 | 12.5 | 700 | 2.2664 | 0.7456 | 0.7514 |
| 0.1979 | 14.29 | 800 | 2.1956 | 0.6983 | 0.7039 |
| 0.1843 | 16.07 | 900 | 2.3711 | 0.8182 | 0.8229 |
| 0.1555 | 17.86 | 1000 | 2.4049 | 0.7485 | 0.7569 |
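The Wer and Wer Ortho columns report word error rate on normalized and orthographic (unnormalized) text, respectively. As an illustration of the metric itself, here is a minimal WER implementation based on word-level edit distance; the `wer` function and its examples are ours, not part of this repository (in practice the Hugging Face `evaluate` library's `wer` metric is typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat"))  # one deletion over three words
```

For example, a Wer of 0.4097 (the best checkpoint above, at step 100) means roughly 41 word-level errors per 100 reference words.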
Base model: openai/whisper-tiny