How to use seiching/whisper-medium-seiching with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="seiching/whisper-medium-seiching")
```

```python
# Load the model and processor directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("seiching/whisper-medium-seiching")
model = AutoModelForSpeechSeq2Seq.from_pretrained("seiching/whisper-medium-seiching")
```

This model is a fine-tuned version of openai/whisper-medium on the Common Voice 13 dataset. It achieves the following results on the evaluation set:
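Whisper checkpoints expect 16 kHz mono audio. The pipeline resamples automatically when given a file path, but a raw waveform array must already be at 16 kHz. A minimal sketch of preparing an array for the pipeline, using a pure-NumPy linear-interpolation resampler (`resample_to_16k` is an illustrative helper, not part of Transformers):

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int) -> np.ndarray:
    """Linearly resample a mono float waveform to 16 kHz."""
    target_sr = 16_000
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_out = int(round(duration * target_sr))
    # Express both sample grids in seconds, then interpolate.
    t_in = np.arange(len(audio)) / orig_sr
    t_out = np.arange(n_out) / target_sr
    return np.interp(t_out, t_in, audio).astype(np.float32)

# One second of 44.1 kHz audio resampled for the ASR pipeline:
wave_44k = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100).astype(np.float32)
wave_16k = resample_to_16k(wave_44k, 44_100)
# pipe({"array": wave_16k, "sampling_rate": 16_000})  # feed to the pipeline above
```

For production use, a band-limited resampler (e.g. `librosa.resample` or `torchaudio`) avoids the aliasing that linear interpolation can introduce.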
More information needed
The specific training hyperparameters are not listed. The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | WER Ortho (%) | WER (%) |
|---|---|---|---|---|---|
| 0.1472 | 0.69 | 500 | 0.1579 | 36.6425 | 36.4544 |
| 0.0545 | 1.38 | 1000 | 0.1685 | 37.4093 | 37.4725 |
| 0.0227 | 2.06 | 1500 | 0.1751 | 37.5544 | 37.9118 |
| 0.0262 | 2.75 | 2000 | 0.1885 | 37.9689 | 37.4925 |
| 0.0203 | 3.44 | 2500 | 0.2042 | 37.2228 | 36.7938 |
| 0.0123 | 4.13 | 3000 | 0.2065 | 38.3834 | 37.9916 |
| 0.0121 | 4.81 | 3500 | 0.2065 | 37.6373 | 37.7720 |
| 0.0151 | 5.5 | 4000 | 0.2083 | 37.9482 | 37.6922 |
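The two WER columns report word error rate in percent; "Ortho" is computed on the orthographic (unnormalized) transcript. WER is the word-level edit distance between reference and hypothesis divided by the number of reference words. A minimal hand-rolled sketch for reference (the card's numbers were presumably produced with a library such as `evaluate` or `jiwer`, which is an assumption):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

score = wer("the cat sat on the mat", "the cat sat on mat")  # one deletion out of six words
```

Multiplied by 100, this gives the percentage figures shown in the table.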