whisper-small-serbian-v2

This model is a fine-tuned version of openai/whisper-small (the fine-tuning dataset is not specified in this card). It achieves the following results on the evaluation set:

  • Loss: 0.2957
  • WER: 14.0126
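
A minimal transcription sketch using the 🤗 Transformers pipeline API. The model id samil24/whisper-small-serbian-v2 comes from this repository; `audio.wav` is a placeholder file name, and the decoding options are assumptions (forcing Serbian decoding may be unnecessary if the fine-tune already pins the language token):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="samil24/whisper-small-serbian-v2",
)

# "audio.wav" is a placeholder; chunking handles audio longer than
# Whisper's native 30-second window.
result = asr(
    "audio.wav",
    chunk_length_s=30,
    generate_kwargs={"language": "serbian", "task": "transcribe"},
)
print(result["text"])
```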

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1250
  • num_epochs: 20
  • mixed_precision_training: Native AMP
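
For illustration, a hedged sketch of how these settings map onto Seq2SeqTrainingArguments in 🤗 Transformers; the output directory is a placeholder, all values are copied from the list above, and the model/dataset/trainer wiring is omitted:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-serbian-v2",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1250,
    num_train_epochs=20,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```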

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.304         | 0.9234  | 500   | 0.2736          | 21.4309 |
| 0.2213        | 1.8458  | 1000  | 0.2192          | 17.9671 |
| 0.1528        | 2.7682  | 1500  | 0.2149          | 18.1085 |
| 0.0876        | 3.6907  | 2000  | 0.2211          | 16.4412 |
| 0.0598        | 4.6131  | 2500  | 0.2254          | 15.9645 |
| 0.0335        | 5.5355  | 3000  | 0.2396          | 15.7533 |
| 0.0172        | 6.4580  | 3500  | 0.2494          | 15.8825 |
| 0.0109        | 7.3804  | 4000  | 0.2568          | 15.4303 |
| 0.0067        | 8.3029  | 4500  | 0.2684          | 15.4198 |
| 0.0038        | 9.2253  | 5000  | 0.2710          | 14.9571 |
| 0.0036        | 10.1477 | 5500  | 0.2738          | 14.9327 |
| 0.0019        | 11.0702 | 6000  | 0.2772          | 14.7406 |
| 0.0019        | 11.9935 | 6500  | 0.2744          | 14.8262 |
| 0.0018        | 12.9160 | 7000  | 0.2770          | 14.5102 |
| 0.0011        | 13.8384 | 7500  | 0.2784          | 14.3688 |
| 0.0005        | 14.7608 | 8000  | 0.2814          | 14.2762 |
| 0.0002        | 15.6833 | 8500  | 0.2870          | 14.1767 |
| 0.0002        | 16.6057 | 9000  | 0.2904          | 14.1523 |
| 0.0001        | 17.5282 | 9500  | 0.2934          | 14.0563 |
| 0.0001        | 18.4506 | 10000 | 0.2944          | 13.9742 |
| 0.0001        | 19.3730 | 10500 | 0.2957          | 14.0126 |
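
The WER column above is the word error rate, reported as a percentage. A minimal sketch of computing it with the `evaluate` library (the reference/prediction strings below are invented examples, not taken from the evaluation set):

```python
# Requires: pip install evaluate jiwer
import evaluate

wer_metric = evaluate.load("wer")

# Invented example strings, for illustration only.
references = ["dobar dan svima"]
predictions = ["dobar dan svi"]

# evaluate returns WER as a fraction; the table reports percentages.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%")
```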

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.2