Hanhpt23/whisper-small-engmed-v2

This model is a fine-tuned version of openai/whisper-small on the pphuc25/EngMed dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2969
  • WER: 19.8132
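A WER (word error rate) of 19.81 means roughly one word in five is substituted, inserted, or deleted relative to the reference transcript. As a minimal illustration of the metric (evaluation pipelines typically use `jiwer` or the `evaluate` library rather than hand-rolled code), a pure-Python word-level Levenshtein sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Rolling-array Levenshtein DP over word tokens; d holds the previous row.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i  # prev = old d[j-1]; d[0] = cost of deleting i words
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(
                d[j] + 1,                             # deletion
                d[j - 1] + 1,                         # insertion
                prev + (ref[i - 1] != hyp[j - 1]),    # substitution (0 if match)
            )
            prev = cur
    return 100.0 * d[len(hyp)] / len(ref)
```

For example, `wer("a b c d e", "a b x d e")` is 20.0: one substitution over five reference words.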

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
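With a `linear` scheduler and 100 warmup steps, the learning rate ramps from 0 to 1e-4 over the first 100 steps, then decays linearly to 0 at the final step (45,360 here, i.e. 20 epochs × 2,268 steps per epoch, per the table below). A sketch of that schedule as a standalone function (the actual run uses the Transformers scheduler; the function name here is illustrative):

```python
def linear_lr(step: int, base_lr: float = 1e-4,
              warmup_steps: int = 100, total_steps: int = 45_360) -> float:
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        # Warmup phase: ramp proportionally to the step count.
        return base_lr * step / warmup_steps
    # Decay phase: base_lr at end of warmup, 0 at (and after) total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Note that with only 100 warmup steps out of 45,360, the schedule is almost entirely decay; the peak rate of 1e-4 is reached within the first epoch.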

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|---------------|-------|-------|-----------------|---------|
| 0.7724        | 1.0   | 2268  | 0.7573          | 43.9665 |
| 0.439         | 2.0   | 4536  | 0.7693          | 43.3383 |
| 0.2876        | 3.0   | 6804  | 0.8212          | 32.4148 |
| 0.1882        | 4.0   | 9072  | 0.8900          | 25.3134 |
| 0.128         | 5.0   | 11340 | 0.9573          | 26.0614 |
| 0.0812        | 6.0   | 13608 | 1.0126          | 21.0453 |
| 0.0635        | 7.0   | 15876 | 1.0505          | 22.0544 |
| 0.0471        | 8.0   | 18144 | 1.0961          | 22.6690 |
| 0.032         | 9.0   | 20412 | 1.1188          | 21.7300 |
| 0.0238        | 10.0  | 22680 | 1.1409          | 22.2326 |
| 0.0228        | 11.0  | 24948 | 1.1615          | 21.0589 |
| 0.0165        | 12.0  | 27216 | 1.2063          | 21.2099 |
| 0.0073        | 13.0  | 29484 | 1.2233          | 20.5515 |
| 0.0088        | 14.0  | 31752 | 1.2199          | 20.3353 |
| 0.0037        | 15.0  | 34020 | 1.2605          | 20.1151 |
| 0.0021        | 16.0  | 36288 | 1.2565          | 19.8531 |
| 0.0044        | 17.0  | 38556 | 1.2666          | 20.6187 |
| 0.0006        | 18.0  | 40824 | 1.2941          | 19.7616 |
| 0.0001        | 19.0  | 43092 | 1.2905          | 19.9700 |
| 0.0001        | 20.0  | 45360 | 1.2969          | 19.8132 |
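The table shows a familiar overfitting pattern: validation loss bottoms out at epoch 1 and climbs steadily while training loss approaches zero, yet WER keeps improving until epoch 18 (19.76, slightly better than the reported final epoch). A small sketch that scans the table's (epoch, validation loss, WER) rows for the best checkpoints by each criterion:

```python
# (epoch, validation_loss, wer) transcribed from the training results table.
RESULTS = [
    (1, 0.7573, 43.9665), (2, 0.7693, 43.3383), (3, 0.8212, 32.4148),
    (4, 0.8900, 25.3134), (5, 0.9573, 26.0614), (6, 1.0126, 21.0453),
    (7, 1.0505, 22.0544), (8, 1.0961, 22.6690), (9, 1.1188, 21.7300),
    (10, 1.1409, 22.2326), (11, 1.1615, 21.0589), (12, 1.2063, 21.2099),
    (13, 1.2233, 20.5515), (14, 1.2199, 20.3353), (15, 1.2605, 20.1151),
    (16, 1.2565, 19.8531), (17, 1.2666, 20.6187), (18, 1.2941, 19.7616),
    (19, 1.2905, 19.9700), (20, 1.2969, 19.8132),
]

best_wer = min(RESULTS, key=lambda r: r[2])   # lowest WER: epoch 18
best_loss = min(RESULTS, key=lambda r: r[1])  # lowest validation loss: epoch 1
```

This is why speech-recognition fine-tunes are usually checkpointed on WER rather than loss: the two criteria select very different epochs here (18 vs. 1).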

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1
Model size: 0.2B parameters (F32, Safetensors)
Model tree for Hanhpt23/whisper-small-engmed-v2: fine-tuned from openai/whisper-small