Dataset: google/fleurs
How to use BrainTheos/whisper-base-ln with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="BrainTheos/whisper-base-ln")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("BrainTheos/whisper-base-ln")
model = AutoModelForSpeechSeq2Seq.from_pretrained("BrainTheos/whisper-base-ln")
```

This model is a fine-tuned version of openai/whisper-base on the FLEURS dataset. It achieves the following results on the evaluation set:
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following hyperparameters were used during training:

More information needed

Training results:
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.0081 | 21.0 | 1000 | 0.6218 | 29.8710 |
| 0.0016 | 42.01 | 2000 | 0.6865 | 25.1188 |
| 0.0009 | 63.01 | 3000 | 0.7152 | 24.9151 |
| 0.0007 | 85.0 | 4000 | 0.7265 | 25.0509 |
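The WER column reports word error rate as a percentage: the word-level edit distance between the reference transcript and the model output, divided by the number of reference words. A sketch of how such a score is computed (a standard Levenshtein-based implementation, not the exact evaluation code used for this model; the sample phrases are illustrative):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("mbote na yo", "mbote yo"))  # one deleted word out of three
```

A WER of 25 therefore means roughly one word-level error per four reference words.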