whisper-small-tigrinya-r-28

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Cer: 16.3883
  • Loss: 0.4323
  • Wer: 44.4351
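The Cer and Wer figures above are percentages: character and word error rates between the reference transcript and the model output. As a minimal sketch of how such rates are computed (a pure-Python Levenshtein distance; these helper names are illustrative, not from the training code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # min of deletion, insertion, substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref, hyp):
    """Word error rate as a percentage of reference words."""
    ref_w = ref.split()
    return 100.0 * edit_distance(ref_w, hyp.split()) / len(ref_w)

def cer(ref, hyp):
    """Character error rate as a percentage of reference characters."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)
```

Libraries such as `jiwer` or `evaluate` compute the same quantities in practice.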

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: paged_adamw_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
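Two of the values above follow from the others: the total train batch size is the per-device batch size times the gradient accumulation steps, and the learning rate warms up linearly for 100 steps before decaying. A sketch, assuming a standard linear-warmup-then-cosine-decay schedule (the exact scheduler used in training may differ in detail):

```python
import math

# Hyperparameters from the list above.
learning_rate = 1e-4
train_batch_size = 8
gradient_accumulation_steps = 4
warmup_steps = 100
total_steps = 3750  # final step in the training results table

# Effective (total) train batch size: per-device batch x accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32

def cosine_schedule_lr(step):
    """Linear warmup to the peak LR, then cosine decay to zero."""
    if step < warmup_steps:
        return learning_rate * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return learning_rate * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In `transformers` this corresponds to `get_cosine_schedule_with_warmup`.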

Training results

| Training Loss | Epoch  | Step | Cer     | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:-------:|:---------------:|:-------:|
| 1.7237        | 0.6680 | 250  | 21.0719 | 0.4191          | 54.7073 |
| 1.2838        | 1.3340 | 500  | 16.5156 | 0.3613          | 46.5236 |
| 1.2354        | 2.0    | 750  | 15.6233 | 0.3347          | 44.5935 |
| 1.0210        | 2.6680 | 1000 | 15.6542 | 0.3255          | 44.0587 |
| 0.7947        | 3.3340 | 1250 | 15.2447 | 0.3284          | 43.1325 |
| 0.8205        | 4.0    | 1500 | 15.1736 | 0.3213          | 42.9502 |
| 0.6328        | 4.6680 | 1750 | 15.3297 | 0.3362          | 43.1205 |
| 0.4849        | 5.3340 | 2000 | 15.3243 | 0.3516          | 42.9173 |
| 0.5075        | 6.0    | 2250 | 15.2106 | 0.3511          | 42.5409 |
| 0.3795        | 6.6680 | 2500 | 15.8178 | 0.3769          | 43.8704 |
| 0.2632        | 7.3340 | 2750 | 16.1000 | 0.4020          | 43.9690 |
| 0.2492        | 8.0    | 3000 | 15.9667 | 0.4070          | 44.1125 |
| 0.1995        | 8.6680 | 3250 | 16.2856 | 0.4243          | 44.4471 |
| 0.1714        | 9.3340 | 3500 | 16.4018 | 0.4308          | 44.5397 |
| 0.1666        | 10.0   | 3750 | 16.3883 | 0.4323          | 44.4351 |
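Validation WER and loss bottom out mid-training (around epochs 4–6) while training loss keeps falling, a typical overfitting pattern: the final checkpoint is not the best one by these metrics. A quick sketch locating the best rows (values transcribed from the table above):

```python
# Validation rows (step, cer, val_loss, wer) transcribed from the table above.
rows = [
    (250, 21.0719, 0.4191, 54.7073),
    (500, 16.5156, 0.3613, 46.5236),
    (750, 15.6233, 0.3347, 44.5935),
    (1000, 15.6542, 0.3255, 44.0587),
    (1250, 15.2447, 0.3284, 43.1325),
    (1500, 15.1736, 0.3213, 42.9502),
    (1750, 15.3297, 0.3362, 43.1205),
    (2000, 15.3243, 0.3516, 42.9173),
    (2250, 15.2106, 0.3511, 42.5409),
    (2500, 15.8178, 0.3769, 43.8704),
    (2750, 16.1000, 0.4020, 43.9690),
    (3000, 15.9667, 0.4070, 44.1125),
    (3250, 16.2856, 0.4243, 44.4471),
    (3500, 16.4018, 0.4308, 44.5397),
    (3750, 16.3883, 0.4323, 44.4351),
]

best_wer_step = min(rows, key=lambda r: r[3])[0]   # step 2250 (epoch 6)
best_loss_step = min(rows, key=lambda r: r[2])[0]  # step 1500 (epoch 4)
```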

Framework versions

  • PEFT 0.18.1
  • Transformers 5.2.0
  • Pytorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.2
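
Since this repository holds a PEFT adapter rather than full model weights, inference requires loading the adapter on top of the base model. A minimal sketch, assuming the standard `peft` + `transformers` workflow (generation details below are illustrative, not taken from this card):

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then attach the LoRA adapter from this repo.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "Aregay01/whisper-small-tigrinya-r-28")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# `audio` should be a 16 kHz mono waveform as a 1-D float array.
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# with torch.no_grad():
#     ids = model.generate(input_features=inputs.input_features)
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```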

Model tree for Aregay01/whisper-small-tigrinya-r-28

This model is a PEFT adapter of openai/whisper-small.