whisper-medium-30s

This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6783
  • CER: 15.6224
  • WER: 27.6245
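
For context, WER and CER for Whisper fine-tunes are typically computed with the Hugging Face evaluate library; the sketch below is illustrative only (the transcript strings are placeholders, not from this model's evaluation data). evaluate returns fractions, so values like those above correspond to the fraction multiplied by 100.

```python
import evaluate

# Minimal sketch: score predictions the way Whisper fine-tunes are
# usually evaluated. The reference/prediction strings are placeholders.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["the cat sat on the mat"]
predictions = ["the cat sat on a mat"]

# Multiply by 100 to match the scale of the metrics reported above.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```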

Model description

More information needed

Intended uses & limitations

More information needed
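
Although this section is unfilled, the checkpoint can be loaded like any Whisper model. The sketch below uses the standard transformers automatic-speech-recognition pipeline; the audio path is a placeholder, and the repository ID is taken from this model page.

```python
from transformers import pipeline

# Minimal usage sketch: load the fine-tuned checkpoint with the standard
# ASR pipeline. "audio.wav" is a placeholder; Whisper natively processes
# audio in 30-second windows.
asr = pipeline(
    "automatic-speech-recognition",
    model="NgQuocThai/whisper-medium-30s",
)

result = asr("audio.wav")
print(result["text"])
```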

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 25
  • mixed_precision_training: Native AMP
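
As a minimal sketch, these hyperparameters map onto transformers' Seq2SeqTrainingArguments roughly as follows, assuming the run used Seq2SeqTrainer; the output_dir value is a placeholder, not the author's actual configuration. (Note that the results table below stops at epoch 18 of the configured 25, so training may have ended early.)

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the listed hyperparameters expressed as training arguments.
# "whisper-medium-30s" as output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-30s",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=25,
    fp16=True,  # "Native AMP" mixed-precision training
)
```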

Training results

| Training Loss | Epoch | Step  | Validation Loss | CER     | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.1281        | 1.0   | 1796  | 0.6781          | 22.4463 | 39.5604 |
| 0.8481        | 2.0   | 3592  | 0.6186          | 22.8864 | 39.1632 |
| 0.6635        | 3.0   | 5388  | 0.5958          | 22.3883 | 36.0063 |
| 0.5452        | 4.0   | 7184  | 0.5845          | 19.1954 | 33.5890 |
| 0.4535        | 5.0   | 8980  | 0.5879          | 20.0356 | 33.8971 |
| 0.3794        | 6.0   | 10776 | 0.5924          | 17.1588 | 30.0007 |
| 0.3151        | 7.0   | 12572 | 0.5959          | 16.4786 | 29.2885 |
| 0.2661        | 8.0   | 14368 | 0.6140          | 16.1885 | 28.5078 |
| 0.2212        | 9.0   | 16164 | 0.6200          | 16.4586 | 28.9393 |
| 0.1849        | 10.0  | 17960 | 0.6326          | 16.5526 | 28.8708 |
| 0.1538        | 11.0  | 19756 | 0.6528          | 15.8364 | 27.8984 |
| 0.1296        | 12.0  | 21552 | 0.6647          | 15.8504 | 28.2339 |
| 0.1087        | 13.0  | 23348 | 0.6783          | 15.6224 | 27.6245 |
| 0.0908        | 14.0  | 25144 | 0.7006          | 15.9265 | 28.1928 |
| 0.0778        | 15.0  | 26940 | 0.7058          | 15.7464 | 27.8778 |
| 0.0656        | 16.0  | 28736 | 0.7182          | 15.7324 | 27.7820 |
| 0.0568        | 17.0  | 30532 | 0.7351          | 15.9005 | 28.0353 |
| 0.0487        | 18.0  | 32328 | 0.7429          | 15.6484 | 27.6861 |

Framework versions

  • Transformers 4.53.3
  • PyTorch 2.7.1+cu118
  • Datasets 3.6.0
  • Tokenizers 0.21.2