How to use Sagicc/whisper-large-sr-v2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="Sagicc/whisper-large-sr-v2")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("Sagicc/whisper-large-sr-v2")
model = AutoModelForSpeechSeq2Seq.from_pretrained("Sagicc/whisper-large-sr-v2")
```

This model is a fine-tuned version of openai/whisper-large-v3 on the Common Voice 16.1 dataset; evaluation results are reported in the training table below.
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
Training results:
| Training Loss | Epoch | Step | Validation Loss | WER (ortho) | WER |
|---|---|---|---|---|---|
| 0.1691 | 0.03 | 500 | 0.1776 | 0.2060 | 0.0941 |
| 0.1538 | 0.05 | 1000 | 0.1459 | 0.1743 | 0.0730 |
| 0.1522 | 0.08 | 1500 | 0.1401 | 0.1663 | 0.0689 |
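The last two columns report word error rate computed on the raw orthographic text and on normalized text, respectively. As a point of reference, WER can be sketched as word-level edit distance divided by reference length; this is a minimal illustration and does not reproduce whatever text normalization was applied for the numbers above:

```python
# Minimal WER sketch: word-level Levenshtein distance
# (substitutions + insertions + deletions) over the number of
# reference words. Assumes a non-empty reference string.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the hat sat"))  # one substitution over three words
```

A WER of 0.0689, as in the final row, means roughly 7 word-level errors per 100 reference words.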
Base model: openai/whisper-large-v3
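As a usage note, the high-level pipeline shown above can transcribe a local recording directly. The sketch below is hypothetical: the file name `audio.wav` and the `generate_kwargs` options are illustrative assumptions, not part of this card, and the first call downloads the full checkpoint.

```python
# Hypothetical usage sketch for the fine-tuned checkpoint.
# "audio.wav", chunk_length_s, and the generation options are assumptions.
def transcribe(path: str) -> str:
    from transformers import pipeline  # imported lazily: heavy dependency

    pipe = pipeline(
        "automatic-speech-recognition",
        model="Sagicc/whisper-large-sr-v2",
        chunk_length_s=30,  # process long audio in 30 s windows
    )
    # Whisper accepts decode-time options via generate_kwargs.
    out = pipe(path, generate_kwargs={"language": "serbian", "task": "transcribe"})
    return out["text"]

if __name__ == "__main__":
    print(transcribe("audio.wav"))
```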