# rahafvii/STT-EGY

Dataset: MightyStudent/Egyptian-ASR-MGB-3
## How to use rahafvii/STT-EGY with Transformers

Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="rahafvii/STT-EGY")
```

Or load the processor and model directly:

```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("rahafvii/STT-EGY")
model = AutoModelForSpeechSeq2Seq.from_pretrained("rahafvii/STT-EGY")
```

This model is a fine-tuned version of openai/whisper-small on the Egyptian-ASR-MGB-3 dataset. It achieves the following results on the evaluation set:
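Besides file paths, the ASR pipeline can accept a raw waveform together with its sampling rate. Whisper-family models expect 16 kHz mono float32 audio in the range [-1, 1], so 16-bit PCM samples need rescaling first. A minimal sketch (the function name and the silent test waveform are illustrative, not part of this model's API):

```python
import numpy as np

def pcm16_to_float32(samples):
    """Convert int16 PCM samples to float32 in [-1.0, 1.0)."""
    return np.asarray(samples, dtype=np.float32) / 32768.0

# Illustrative input: one second of silence at 16 kHz.
waveform = pcm16_to_float32(np.zeros(16000, dtype=np.int16))

# The converted array could then be fed to the pipeline, e.g.:
# pipe({"array": waveform, "sampling_rate": 16000})
```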
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.4615 | 6.8966 | 100 | 0.6496 | 48.0162 |
| 0.072 | 13.7931 | 200 | 0.7459 | 46.3990 |
| 0.0122 | 20.6897 | 300 | 0.8380 | 45.6863 |
| 0.0054 | 27.5862 | 400 | 0.8981 | 45.0764 |
| 0.0033 | 34.4828 | 500 | 0.9322 | 45.2820 |
| 0.0025 | 41.3793 | 600 | 0.9555 | 45.4670 |
| 0.002 | 48.2759 | 700 | 0.9724 | 46.1454 |
| 0.0017 | 55.1724 | 800 | 0.9843 | 45.9467 |
| 0.0016 | 62.0690 | 900 | 0.9916 | 46.0769 |
| 0.0015 | 68.9655 | 1000 | 0.9939 | 46.2893 |
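The WER column above is the word error rate, i.e. the word-level edit distance between reference and hypothesis transcripts, divided by the number of reference words. A minimal sketch using the standard Levenshtein definition over whitespace-tokenized words (training runs on the Hub typically compute this with the `evaluate`/`jiwer` packages instead):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage, via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```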
Base model: openai/whisper-small