# Graduation_Project_Distil_Whisper_base3
This model is a fine-tuned version of [Baselhany/Graduation_Project_Distil_Whisper_base3](https://huggingface.co/Baselhany/Graduation_Project_Distil_Whisper_base3) on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the list):
- Loss: 0.1698
- Wer: 0.3720
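
The model can be loaded with the standard `transformers` automatic-speech-recognition pipeline. This is a minimal sketch; `audio.wav` is a placeholder path for any audio file you want to transcribe.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distil_Whisper_base3",
)

# "audio.wav" is a placeholder; the pipeline resamples audio as needed.
result = asr("audio.wav")
print(result["text"])
```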
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
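
The hyperparameters above map onto `transformers.Seq2SeqTrainingArguments` roughly as in the sketch below. This is an assumption about how the run was configured, not the author's actual script; `output_dir` is a hypothetical path.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base3-finetuned",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15,
    fp16=True,  # native AMP mixed-precision training
)
```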
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
| 1.4782 | 1.0 | 520 | 0.1344 | 0.4069 |
| 1.0405 | 2.0 | 1040 | 0.1495 | 0.4004 |
| 0.5482 | 3.0 | 1560 | 0.1573 | 0.3436 |
| 0.4041 | 4.0 | 2080 | 0.1587 | 0.4126 |
| 0.3115 | 5.0 | 2600 | 0.1569 | 0.3798 |
| 0.2612 | 6.0 | 3120 | 0.1515 | 0.4272 |
| 0.187 | 7.0 | 3640 | 0.1577 | 0.3917 |
| 0.1596 | 8.0 | 4160 | 0.1538 | 0.4334 |
| 0.1465 | 9.0 | 4680 | 0.1497 | 0.3771 |
| 0.1149 | 10.0 | 5200 | 0.1506 | 0.4192 |
| 0.0935 | 11.0 | 5720 | 0.1465 | 0.3974 |
| 0.0849 | 12.0 | 6240 | 0.1483 | 0.3979 |
| 0.0686 | 13.0 | 6760 | 0.1472 | 0.4237 |
| 0.0533 | 14.0 | 7280 | 0.1483 | 0.4251 |
| 0.0382 | 14.9726 | 7785 | 0.1498 | 0.4302 |
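
The Wer column is the word error rate (lower is better), so the final evaluation Wer of 0.3720 corresponds to roughly 37% of reference words being substituted, inserted, or deleted. A sketch of how this metric is typically computed with the `evaluate` library follows; the reference and prediction strings are illustrative only.

```python
import evaluate

wer_metric = evaluate.load("wer")

# One substitution ("the" -> "a") out of 6 reference words gives WER = 1/6.
wer = wer_metric.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on a mat"],
)
print(wer)  # ~0.167
```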
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1