# exp_002_base_lora

This model is a fine-tuned version of openai/whisper-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5781
- Wer: 54.5829
- Wer Ortho: 57.7728
- Cer: 17.2107
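For reference, WER (word error rate) and CER (character error rate) are edit-distance-based metrics: the Levenshtein distance between hypothesis and reference, normalized by reference length and reported here as a percentage. A minimal, self-contained sketch (not the exact scorer used for this card, which likely relies on the `evaluate`/`jiwer` libraries):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (2-D dynamic programming)."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[m][n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    return 100.0 * edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate as a percentage."""
    return 100.0 * edit_distance(reference, hypothesis) / len(reference)
```

"Wer Ortho" above is the same word-level metric computed on orthographic (unnormalized) text, which is why it runs slightly higher than the normalized WER.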
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
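The `linear` schedule with these settings ramps the learning rate from 0 to 1e-4 over the first 1,000 steps, then decays it linearly to 0 at step 10,000. A minimal sketch mirroring the behavior of Transformers' `get_linear_schedule_with_warmup`:

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=10000):
    """Learning rate at a given step: linear warmup, then linear decay to zero.

    Defaults match the hyperparameters listed above.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0, total_steps - step) / (total_steps - warmup_steps)
```

For example, the rate peaks at 1e-4 at step 1,000 and is back down to 5e-5 by step 5,500 (halfway through the decay phase).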
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Wer Ortho | Cer |
|---|---|---|---|---|---|---|
| 2.364 | 0.05 | 500 | 1.3775 | 89.3018 | 90.4210 | 30.2665 |
| 1.9822 | 0.1 | 1000 | 1.0647 | 79.1535 | 81.1369 | 25.7959 |
| 1.773 | 0.15 | 1500 | 0.8929 | 71.8646 | 74.4727 | 22.9622 |
| 1.6612 | 1.044 | 2000 | 0.8104 | 68.2874 | 70.8852 | 20.9447 |
| 1.5587 | 1.094 | 2500 | 0.7594 | 65.4784 | 68.2362 | 20.1626 |
| 1.5063 | 1.144 | 3000 | 0.7234 | 63.3287 | 66.1393 | 19.5334 |
| 1.4318 | 2.038 | 3500 | 0.6917 | 61.0404 | 64.1920 | 19.0537 |
| 1.3879 | 2.088 | 4000 | 0.6697 | 59.6591 | 62.7346 | 18.5726 |
| 1.3831 | 2.138 | 4500 | 0.6524 | 58.5128 | 61.6924 | 17.9122 |
| 1.3231 | 3.032 | 5000 | 0.6385 | 59.3316 | 62.5270 | 19.6542 |
| 1.3064 | 3.082 | 5500 | 0.6270 | 58.2777 | 61.2398 | 18.6335 |
| 1.2766 | 3.132 | 6000 | 0.6143 | 56.9425 | 60.0897 | 18.0292 |
| 1.255 | 4.026 | 6500 | 0.6063 | 57.1315 | 60.3056 | 18.3886 |
| 1.2481 | 4.076 | 7000 | 0.5986 | 56.2203 | 59.3298 | 17.9156 |
| 1.2161 | 4.126 | 7500 | 0.5905 | 55.6115 | 58.8274 | 17.8844 |
| 1.2109 | 5.02 | 8000 | 0.5860 | 55.6535 | 58.8233 | 17.7709 |
| 1.2183 | 5.07 | 8500 | 0.5834 | 55.3218 | 58.5202 | 17.4484 |
| 1.1953 | 5.12 | 9000 | 0.5781 | 54.5829 | 57.7728 | 17.2107 |
| 1.1721 | 6.014 | 9500 | 0.5760 | 54.8012 | 57.9347 | 17.4963 |
| 1.1816 | 6.064 | 10000 | 0.5753 | 54.7718 | 57.8641 | 17.3142 |
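As a quick sanity check on the trend in the table, the relative WER reduction between the first and last eval checkpoints can be computed directly from the logged values:

```python
# Validation WER from the table above.
first_wer = 89.3018   # step 500
final_wer = 54.7718   # step 10000
relative_reduction = 100.0 * (1.0 - final_wer / first_wer)
print(f"Relative WER reduction over training: {relative_reduction:.1f}%")
# → prints "Relative WER reduction over training: 38.7%"
```

Note that the best validation loss and WER occur at step 9,000 (0.5781 / 54.5829), which matches the headline metrics reported at the top of this card.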
### Framework versions
- PEFT 0.12.0
- Transformers 4.48.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4