Whisper-base

This model is a fine-tuned version of openai/whisper-base for Panjabi (Punjabi) automatic speech recognition, trained on various datasets. It achieves the following results on the evaluation set:

  • Loss: 0.2729
  • Wer: 38.4713
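
As a usage sketch, the checkpoint can be loaded with the standard transformers automatic-speech-recognition pipeline (the audio file path below is a placeholder):

```python
from transformers import pipeline

# Load this checkpoint into the standard ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="aipanjab/whisper-base-pa")

# "sample_punjabi.wav" is a placeholder path to an audio file.
result = asr("sample_punjabi.wav")
print(result["text"])
```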

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
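
For reference, a minimal sketch of how the settings above map onto the transformers Seq2SeqTrainingArguments API (the output directory is a placeholder, and library defaults are assumed where the list does not specify a value):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-pa",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                    # native AMP mixed-precision training
)
```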

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     |
|---------------|---------|-------|-----------------|---------|
| 0.1851        | 1.2804  | 1000  | 0.2017          | 47.7530 |
| 0.1354        | 2.5608  | 2000  | 0.1628          | 40.4972 |
| 0.1025        | 3.8412  | 3000  | 0.1542          | 38.4193 |
| 0.0558        | 5.1216  | 4000  | 0.1625          | 37.7120 |
| 0.0438        | 6.4020  | 5000  | 0.1783          | 38.2217 |
| 0.0311        | 7.6825  | 6000  | 0.1950          | 38.3855 |
| 0.0196        | 8.9629  | 7000  | 0.2156          | 38.3647 |
| 0.0092        | 10.2433 | 8000  | 0.2482          | 38.6430 |
| 0.0059        | 11.5237 | 9000  | 0.2635          | 38.6898 |
| 0.0051        | 12.8041 | 10000 | 0.2729          | 38.4713 |
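
The Wer column above is the word error rate in percent. A minimal sketch of computing the same metric with the evaluate library (the example strings are hypothetical):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical transcripts; a real evaluation decodes the full test set.
predictions = ["this is a test transcription"]
references = ["this is the test transcription"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # same percent scale as the tables above
```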

Framework versions

  • Transformers 4.46.3
  • Pytorch 2.1.0+cu118
  • Datasets 3.1.0
  • Tokenizers 0.20.3