
whisper-small-tigrinya-r-32

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set:

  • CER: 16.2664
  • Loss: 0.4443
  • WER: 44.0975
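WER and CER follow the standard edit-distance definitions (errors per reference word or character, here reported as percentages). A minimal pure-Python sketch of those definitions, not the evaluation code used for this card (which most likely relied on a library such as `evaluate` or `jiwer`):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over two sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    # Word Error Rate: word-level edit distance / number of reference words, in %.
    ref, hyp = reference.split(), hypothesis.split()
    return 100.0 * edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    # Character Error Rate: the same computation at the character level.
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("a b c d", "a x c")` is 50.0: one substitution plus one deletion against four reference words.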

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
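The cosine schedule with 100 warmup steps has the usual linear-warmup/cosine-decay shape. A sketch of that multiplier applied to the peak learning rate above (total step count of 3750 taken from the results table; the exact implementation in `transformers` may differ in minor details):

```python
import math

def cosine_lr(step: int, peak_lr: float = 1e-4,
              warmup_steps: int = 100, total_steps: int = 3750) -> float:
    # Linear warmup from 0 to peak_lr over warmup_steps,
    # then cosine decay from peak_lr down to 0 at total_steps.
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The learning rate peaks at 1e-4 at step 100 and falls smoothly to 0 by the end of epoch 10.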

Training results

| Training Loss | Epoch  | Step | CER     | Validation Loss | WER     |
|---------------|--------|------|---------|-----------------|---------|
| 1.7067        | 0.6680 | 250  | 20.2379 | 0.4194          | 53.5989 |
| 1.2662        | 1.3340 | 500  | 16.9180 | 0.3535          | 47.0435 |
| 1.2181        | 2.0    | 750  | 15.7577 | 0.3320          | 44.4949 |
| 1.0008        | 2.6680 | 1000 | 15.0975 | 0.3233          | 42.9323 |
| 0.7737        | 3.3340 | 1250 | 15.4744 | 0.3279          | 43.3595 |
| 0.8006        | 4.0    | 1500 | 14.7349 | 0.3221          | 42.7859 |
| 0.6077        | 4.6680 | 1750 | 15.3030 | 0.3387          | 42.8725 |
| 0.4587        | 5.3340 | 2000 | 15.5312 | 0.3529          | 43.2012 |
| 0.4760        | 6.0    | 2250 | 15.4100 | 0.3548          | 42.8665 |
| 0.3458        | 6.6680 | 2500 | 16.5764 | 0.3845          | 44.9252 |
| 0.2320        | 7.3340 | 2750 | 16.5760 | 0.4112          | 44.6443 |
| 0.2221        | 8.0    | 3000 | 16.3563 | 0.4165          | 44.2589 |
| 0.1678        | 8.6680 | 3250 | 16.8778 | 0.4360          | 45.0865 |
| 0.1415        | 9.3340 | 3500 | 16.4335 | 0.4432          | 44.2230 |
| 0.1373        | 10.0   | 3750 | 16.2664 | 0.4443          | 44.0975 |

Framework versions

  • PEFT 0.18.1
  • Transformers 5.2.0
  • Pytorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.2
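Since this is a PEFT adapter rather than a full model, inference requires attaching it to the base checkpoint. A hypothetical usage sketch, assuming the adapter is published on the Hub under the repository name in this card's title and that `transformers` and `peft` are installed with network access:

```python
# Repository names assumed from this model card; adjust if loading locally.
BASE_MODEL = "openai/whisper-small"
ADAPTER_ID = "Aregay01/whisper-small-tigrinya-r-32"

def load_model():
    # Imports kept inside the function: both libraries and Hub access are
    # only needed when the model is actually loaded.
    from transformers import WhisperForConditionalGeneration, WhisperProcessor
    from peft import PeftModel

    processor = WhisperProcessor.from_pretrained(BASE_MODEL)
    base = WhisperForConditionalGeneration.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA adapter
    model.eval()
    return processor, model

if __name__ == "__main__":
    processor, model = load_model()
```

For faster inference, the adapter weights can optionally be folded into the base model with `model.merge_and_unload()` after loading.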