wh_small_fl_no_langid_fleurs_47716_trial

This model is a fine-tuned version of openai/whisper-small on the fleurs dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2928
  • Global WER: 39.1667
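
For reference, here is a minimal inference sketch using the transformers automatic-speech-recognition pipeline. The repository id is taken from the model tree at the bottom of this card, the audio file name is a placeholder, and the language is left unset in line with the "no_langid" setup implied by the model name:

```python
# Minimal sketch: transcribing a local audio file with this checkpoint.
# "sample.wav" is a placeholder; the pipeline resamples file inputs to 16 kHz for Whisper.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dianavdavidson/wh_small_fl_no_langid_fleurs_47716_trial",
)

result = asr("sample.wav")
print(result["text"])
```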

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
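
As a rough, non-authoritative sketch, the values above correspond to Seq2SeqTrainingArguments along the following lines. Only the listed hyperparameters come from this card; the output directory and the evaluation cadence (every 50 steps, inferred from the results table) are assumptions:

```python
# Hedged reconstruction of the training configuration from the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="wh_small_fl_no_langid_fleurs_47716_trial",  # assumed, not stated in the card
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    eval_strategy="steps",  # assumed from the results table
    eval_steps=50,          # assumed from the results table
)
```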

Training results

| Training Loss | Epoch  | Step | Validation Loss | Global WER |
|:-------------:|:------:|:----:|:---------------:|:----------:|
| 0.6858        | 0.0945 | 50   | 0.5392          | 55.3503    |
| 0.4892        | 0.1890 | 100  | 0.4350          | 47.4993    |
| 0.3921        | 0.2836 | 150  | 0.3903          | 44.8364    |
| 0.3917        | 0.3781 | 200  | 0.3638          | 42.1735    |
| 0.3594        | 0.4726 | 250  | 0.3443          | 40.4245    |
| 0.3264        | 0.5671 | 300  | 0.3306          | 40.8863    |
| 0.3102        | 0.6616 | 350  | 0.3162          | 39.0095    |
| 0.3000        | 0.7561 | 400  | 0.3102          | 38.0662    |
| 0.2897        | 0.8507 | 450  | 0.3021          | 38.0466    |
| 0.2747        | 0.9452 | 500  | 0.2928          | 39.1667    |
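
Global WER above is presumably the word error rate over the pooled evaluation set, expressed as a percentage. A minimal sketch of how such a metric is commonly computed with the evaluate library (the exact metric code used for this run is not shown in the card, and the strings below are placeholders, not data from this run):

```python
# Hedged example: corpus-level WER with the `evaluate` library (requires jiwer).
import evaluate

wer_metric = evaluate.load("wer")

references = ["the quick brown fox", "hello world"]
predictions = ["the quick brown fox", "hello word"]

# evaluate's WER is a fraction; multiply by 100 to match the percentages in the table.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"Global WER: {wer:.4f}")
```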

Framework versions

  • Transformers 5.0.0.dev0
  • PyTorch 2.9.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.22.2

Safetensors checkpoint: 0.2B parameters, F32 tensors.

Model tree for dianavdavidson/wh_small_fl_no_langid_fleurs_47716_trial: finetuned from openai/whisper-small.