How to use ZhaoxiZheng/whisper-tiny with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="ZhaoxiZheng/whisper-tiny")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("ZhaoxiZheng/whisper-tiny")
model = AutoModelForSpeechSeq2Seq.from_pretrained("ZhaoxiZheng/whisper-tiny")
```

This model is a fine-tuned version of openai/whisper-tiny on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set (final checkpoint, step 500, from the training results table below):

- Loss: 0.6637
- Wer Ortho: 0.3263
- Wer: 0.3276
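The processor and model loaded above can be exercised end to end. A minimal sketch, assuming a 16 kHz mono waveform (here a synthetic sine tone stands in for real speech, so the transcription itself is not meaningful):

```python
import numpy as np
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

model_id = "ZhaoxiZheng/whisper-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id)

# Whisper expects 16 kHz audio; a real use case would load a speech file here.
sr = 16000
audio = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

# Convert the waveform to log-mel input features, generate token ids, decode.
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
ids = model.generate(inputs.input_features, max_new_tokens=64)
text = processor.batch_decode(ids, skip_special_tokens=True)[0]
print(text)
```

For real audio, loading with a library such as `librosa` (resampling to 16 kHz) before calling the processor is the usual pattern; the high-level `pipeline` call shown earlier handles that decoding for you.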
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
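The hyperparameter list in this card is incomplete. As a hedged illustration only: apart from `max_steps=500` and the 50-step evaluation/save cadence, which match the results table, every value below is an assumption, not the card's actual configuration.

```python
# Hypothetical fine-tuning configuration; values other than max_steps and
# the 50-step cadence are assumptions, not taken from this model card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds14",  # hypothetical name
    max_steps=500,                      # results table logs steps 50..500
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    learning_rate=1e-5,                 # assumption
    per_device_train_batch_size=16,     # assumption
    warmup_steps=50,                    # assumption
    metric_for_best_model="wer",
    greater_is_better=False,            # lower WER is better
    load_best_model_at_end=True,
)
```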
## Training procedure

### Training hyperparameters

The hyperparameter values used during training were not preserved in this card (more information needed).

### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|---|---|---|---|---|---|
| 1.3521 | 1.7857 | 50 | 0.5871 | 0.4127 | 0.3849 |
| 0.2839 | 3.5714 | 100 | 0.4864 | 0.3356 | 0.3300 |
| 0.0983 | 5.3571 | 150 | 0.5188 | 0.3387 | 0.3270 |
| 0.0285 | 7.1429 | 200 | 0.5651 | 0.3282 | 0.3164 |
| 0.0064 | 8.9286 | 250 | 0.5842 | 0.3152 | 0.3123 |
| 0.0021 | 10.7143 | 300 | 0.6164 | 0.3313 | 0.3312 |
| 0.0013 | 12.5 | 350 | 0.6319 | 0.3263 | 0.3259 |
| 0.0009 | 14.2857 | 400 | 0.6441 | 0.3245 | 0.3235 |
| 0.0007 | 16.0714 | 450 | 0.6542 | 0.3251 | 0.3241 |
| 0.0006 | 17.8571 | 500 | 0.6637 | 0.3263 | 0.3276 |
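Both Wer columns are word error rates: the word-level edit distance between the reference transcript and the model output, divided by the number of reference words. In common Whisper fine-tuning setups, "Wer Ortho" is computed on the raw orthographic text and "Wer" after normalization (lowercasing, punctuation stripping); that convention is assumed here. A minimal pure-Python sketch of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words; substitutions,
    # insertions, and deletions each cost 1.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("pay my bill please", "pay my bills please"))  # → 0.25 (1 of 4 words wrong)
```

In practice the `evaluate` library's `"wer"` metric (backed by `jiwer`) is typically used rather than a hand-rolled implementation.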