Hanhpt23/whisper-small-Encod-frenchmed

This model is a fine-tuned version of openai/whisper-small on the pphuc25/FrenchMed dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6120
  • Wer: 44.5015
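The WER figure above is a percentage: the word-level edit distance between the reference and hypothesis transcripts, divided by the number of reference words. A minimal sketch of the computation (a plain dynamic-programming implementation, not the exact library the card's metric came from):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance / #ref words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between the first i ref words and first j hyp words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # d[i-1][j-1] from the previous row
        d[0] = i             # deleting all i reference words
        for j, h in enumerate(hyp, 1):
            cur = d[j]       # d[i-1][j] before overwriting
            d[j] = min(
                d[j - 1] + 1,        # insertion
                d[j] + 1,            # deletion
                prev + (r != h),     # substitution (free if words match)
            )
            prev = cur
    return 100.0 * d[-1] / len(ref)
```

For example, `wer("a b c d", "a x c d")` is 25.0: one substitution over four reference words.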

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
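These settings imply a linear warmup for the first 100 steps followed by a linear decay to zero, in the style of `transformers`' `get_linear_schedule_with_warmup`. A sketch of the resulting learning-rate curve (total steps taken from the results table: 20 epochs × 215 steps = 4300):

```python
def linear_lr(step: int, base_lr: float = 1e-4, warmup: int = 100, total: int = 4300) -> float:
    """Linear warmup to base_lr over `warmup` steps, then linear decay to 0 at `total`."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total - step) / (total - warmup))
```

The rate peaks at 1e-4 exactly at step 100 and reaches zero at the final step, halving at the midpoint of the decay phase (step 2200).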

Training results

Training Loss  Epoch  Step  Validation Loss  Wer
1.2076         1.0    215   1.2220           65.9091
0.7142         2.0    430   1.3108           57.4047
0.3943         3.0    645   1.3278           56.9648
0.2270         4.0    860   1.4398           46.9208
0.1701         5.0    1075  1.4437           46.3343
0.1137         6.0    1290  1.5016           48.8270
0.0942         7.0    1505  1.6104           47.0674
0.0627         8.0    1720  1.5725           48.1672
0.0484         9.0    1935  1.6233           45.5279
0.0454         10.0   2150  1.6325           49.7801
0.0276         11.0   2365  1.6426           45.7478
0.0246         12.0   2580  1.6756           44.7214
0.0189         13.0   2795  1.6642           46.1144
0.0135         14.0   3010  1.6463           43.1818
0.0036         15.0   3225  1.6292           44.3548
0.0066         16.0   3440  1.6133           45.0147
0.0022         17.0   3655  1.6154           45.8211
0.0015         18.0   3870  1.6086           45.5279
0.0002         19.0   4085  1.6105           45.2346
0.0005         20.0   4300  1.6120           44.5015
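Note that the lowest validation WER (43.1818, at epoch 14) precedes the final checkpoint, whose WER is 44.5015. Picking the best checkpoint by metric rather than taking the last one can be sketched directly from the WER column above:

```python
# (epoch, validation WER) pairs from the training results table.
results = [
    (1, 65.9091), (2, 57.4047), (3, 56.9648), (4, 46.9208), (5, 46.3343),
    (6, 48.8270), (7, 47.0674), (8, 48.1672), (9, 45.5279), (10, 49.7801),
    (11, 45.7478), (12, 44.7214), (13, 46.1144), (14, 43.1818), (15, 44.3548),
    (16, 45.0147), (17, 45.8211), (18, 45.5279), (19, 45.2346), (20, 44.5015),
]

# Select the epoch with the lowest WER (lower is better for error rates).
best_epoch, best_wer = min(results, key=lambda r: r[1])
```

With these numbers, `best_epoch` is 14 and `best_wer` is 43.1818.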

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1