How to use Porameht/whisper-small-th with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="Porameht/whisper-small-th")
```

```python
# Load the model and processor directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("Porameht/whisper-small-th")
model = AutoModelForSpeechSeq2Seq.from_pretrained("Porameht/whisper-small-th")
```

This model is a fine-tuned version of openai/whisper-small on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set:
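Whisper checkpoints operate on 16 kHz mono audio. A minimal sketch of preparing an input for the pipeline loaded above (the `pipe` name comes from the snippet; the silent array is a stand-in for real speech, and the `{"raw": ..., "sampling_rate": ...}` dict is one of the input forms the ASR pipeline accepts):

```python
import numpy as np

SAMPLING_RATE = 16_000  # Whisper feature extraction assumes 16 kHz

# One second of silence as a placeholder for real speech audio
audio = np.zeros(SAMPLING_RATE, dtype=np.float32)

# Input form accepted by the automatic-speech-recognition pipeline
sample = {"raw": audio, "sampling_rate": SAMPLING_RATE}
# result = pipe(sample)  # would return a dict like {"text": "..."}
print(audio.shape, audio.dtype)
```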
More information needed
The following results were recorded during training (the hyperparameter list was not preserved in this card):
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.2535 | 0.7294 | 1000 | 0.2177 | 73.9061 |
| 0.1453 | 1.4588 | 2000 | 0.1778 | 69.6909 |
| 0.0923 | 2.1882 | 3000 | 0.1648 | 65.8303 |
| 0.0781 | 2.9176 | 4000 | 0.1596 | 64.8535 |
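The WER column reports word error rate as a percentage. A minimal sketch of how WER is computed (word-level edit distance divided by the reference length; real evaluations typically use the `evaluate` or `jiwer` packages rather than this hand-rolled version):

```python
# Word error rate: Levenshtein distance over words, as a percent.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```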
Base model: openai/whisper-small