IonGrozea/whisper-small_ro-80mel
Fine-tuned from openai/whisper-small on the ALL_RO_80MEL Romanian dataset.
Training setup
- Base model: openai/whisper-small
- Dataset: ALL_RO_80MEL (train / validation / test)
- Objective: ASR (Romanian transcription)
- Trainer: Hugging Face Seq2SeqTrainer
- Checkpoint saved: best WER on validation (final_best/)
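The checkpoint can be loaded for inference with the standard Transformers ASR pipeline. This is a minimal sketch, assuming the repo's files are pipeline-compatible; `sample_ro.wav` is a hypothetical local audio file, not part of this repo:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (downloads weights on first run).
asr = pipeline(
    "automatic-speech-recognition",
    model="IonGrozea/whisper-small_ro-80mel",
)

# "sample_ro.wav" is a placeholder path. Forcing language/task keeps Whisper
# from auto-detecting the language or translating instead of transcribing.
result = asr(
    "sample_ro.wav",
    generate_kwargs={"language": "romanian", "task": "transcribe"},
)
print(result["text"])
```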
Training results (per evaluation)
| epoch | step | train_loss | eval_loss | eval_wer | eval_cer |
|---|---|---|---|---|---|
| 1.0000 | 6475 | 0.1062 | 2.8610 | 2.9901 | |
| 2.0000 | 12950 | 0.0852 | 3.8649 | 3.7047 | |
| 3.0000 | 19425 | 0.0792 | 4.4320 | 4.2204 | |
| 4.0000 | 25900 | 0.0857 | 4.3639 | 4.3786 | |
| 4.0000 | 25900 | 0.1062 | 2.6983 | 2.9504 | |
Metrics columns:
- train_loss – training loss logged near this eval step
- eval_loss – validation loss from trainer.evaluate()
- eval_wer – validation WER on a subset (lower is better)
- eval_cer – validation CER on a subset (lower is better)
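WER and CER are both edit-distance ratios: word (or character) insertions, deletions, and substitutions divided by the reference length. A minimal pure-Python sketch, not the exact metric implementation used during training and without the text normalization Whisper evaluations typically apply:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two sequences, single-row DP.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # prev holds dp[i-1][j-1]; dp[j] still holds dp[i-1][j].
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[len(hyp)]

def wer(reference, hypothesis):
    # Word error rate: edit distance over word tokens / reference word count.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference, hypothesis):
    # Character error rate: same computation over characters.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("ana are mere", "ana are pere")` is 1/3: one substituted word out of three reference words.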
The raw HF Trainer logs are also stored in:
training_log.jsonl, trainer_state.json, eval_results.json, data_results.json
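Since training_log.jsonl is line-delimited JSON, the best checkpoint can be recovered by scanning it for the lowest eval_wer. A sketch using hypothetical log lines (the field layout is assumed from the table above, not read from the actual file):

```python
import json

# Two hypothetical eval records in Trainer's JSONL style; in practice you
# would read these lines from training_log.jsonl in this repo.
log_lines = [
    '{"epoch": 1.0, "step": 6475, "eval_wer": 2.9901}',
    '{"epoch": 2.0, "step": 12950, "eval_wer": 3.7047}',
]

records = [json.loads(line) for line in log_lines]
best = min(records, key=lambda r: r["eval_wer"])
print(best["step"], best["eval_wer"])
```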