# whisper-medium-ctc-salt-v4
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3685
- WER: 0.4950
- CER: 0.1125
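The snippet below is a minimal inference sketch, not taken from this card: it assumes the checkpoint resolves through the standard `automatic-speech-recognition` pipeline under the repo id `akera/whisper-medium-ctc-salt-v4`. Note the "ctc" in the model name; if the checkpoint actually carries a CTC head rather than Whisper's usual encoder-decoder, the model class the pipeline resolves to may differ.

```python
# Minimal inference sketch (assumption: the checkpoint loads through the
# standard ASR pipeline; "ctc" in the name may imply a different head).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="akera/whisper-medium-ctc-salt-v4",
)

# Transcribe a local audio file (16 kHz mono is typical for Whisper-family models).
result = asr("sample.wav")
print(result["text"])
```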
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.1 (a fractional value, presumably a 10% warmup ratio rather than a step count)
- training_steps: 7500
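As a hedged illustration only, the configuration below shows how these hyperparameters would map onto `Seq2SeqTrainingArguments` in recent `transformers` releases. It is a reconstruction, not the author's actual training script; the output directory is hypothetical, and the 0.1 warmup value is interpreted as `warmup_ratio`.

```python
# Reconstruction of the reported hyperparameters as Seq2SeqTrainingArguments.
# Assumptions: output_dir is hypothetical, and the reported warmup value of
# 0.1 is read as a 10% warmup ratio (step counts in transformers are integers).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ctc-salt-v4",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",   # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=7500,
)
```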
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|---|---|---|---|---|---|
| 1.8100 | 0.0692 | 500 | 1.4483 | 0.9447 | 0.3121 |
| 1.1358 | 0.1384 | 1000 | 0.7928 | 0.8266 | 0.2115 |
| 0.9544 | 0.2076 | 1500 | 0.6407 | 0.7104 | 0.1784 |
| 0.9246 | 0.2768 | 2000 | 0.5626 | 0.6579 | 0.1615 |
| 0.8689 | 0.3460 | 2500 | 0.5141 | 0.6114 | 0.1470 |
| 0.8005 | 0.4152 | 3000 | 0.4850 | 0.5991 | 0.1431 |
| 0.7604 | 0.4844 | 3500 | 0.4607 | 0.5729 | 0.1341 |
| 0.7502 | 0.5536 | 4000 | 0.4346 | 0.5531 | 0.1290 |
| 0.7427 | 0.6228 | 4500 | 0.4137 | 0.5387 | 0.1247 |
| 0.7206 | 0.6920 | 5000 | 0.4008 | 0.5212 | 0.1189 |
| 0.6932 | 0.7612 | 5500 | 0.3868 | 0.5056 | 0.1156 |
| 0.7040 | 0.8304 | 6000 | 0.3777 | 0.4989 | 0.1136 |
| 0.6987 | 0.8997 | 6500 | 0.3729 | 0.5001 | 0.1140 |
| 0.6623 | 0.9689 | 7000 | 0.3692 | 0.4945 | 0.1122 |
| 0.6484 | 1.0381 | 7500 | 0.3685 | 0.4950 | 0.1125 |
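For reference, WER and CER as reported above can be computed with the `evaluate` library; the sketch below is generic, with illustrative strings rather than this model's evaluation data.

```python
# Generic WER/CER computation sketch using the `evaluate` library
# (example strings are illustrative, not from this model's eval set).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```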
### Framework versions
- Transformers 5.2.0
- Pytorch 2.10.0+cu130
- Datasets 4.6.0
- Tokenizers 0.22.2