# Whisper tiny AR - BH
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set:
- Loss: 0.0344
- Wer: 16.1004
- Cer: 5.1378
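For quick inference, a minimal sketch using the Transformers `pipeline` API is shown below. The repo id is taken from this card; the audio filename (`ayah.wav`) is a placeholder to adapt to your setup.

```python
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Whisper_tiny_fine_tune_Quran",
    device=device,
)

# Transcribe a local audio file (placeholder path; any format ffmpeg can decode).
result = asr("ayah.wav")
print(result["text"])
```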
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
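As a reference, here is a hedged sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments`. The `output_dir`, the per-epoch evaluation cadence (inferred from the results table below), and the `fp16` flag (standing in for "Native AMP") are assumptions, not the author's exact training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-ar-bh",  # placeholder path, not from this card
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,      # effective train batch size: 16 * 4 = 64
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    seed=42,
    fp16=True,                          # assumption: native AMP mixed precision
    eval_strategy="epoch",              # assumption: table shows per-epoch eval
    predict_with_generate=True,         # needed to compute WER/CER during eval
)
```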
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|---|---|---|---|---|---|
| 0.0023 | 1.0 | 157 | 0.0291 | 18.4363 | 5.9503 |
| 0.0007 | 2.0 | 314 | 0.0258 | 19.4172 | 6.2648 |
| 0.0006 | 3.0 | 471 | 0.0290 | 19.4172 | 6.4596 |
| 0.0007 | 4.0 | 628 | 0.0278 | 20.3124 | 6.5744 |
| 0.0007 | 5.0 | 785 | 0.0307 | 21.0409 | 7.0886 |
| 0.0005 | 6.0 | 942 | 0.0311 | 20.6647 | 6.3780 |
| 0.0004 | 7.0 | 1099 | 0.0321 | 21.0028 | 6.7774 |
| 0.0003 | 8.0 | 1256 | 0.0347 | 19.5172 | 6.0479 |
| 0.0002 | 9.0 | 1413 | 0.0356 | 20.1647 | 6.2282 |
| 0.0001 | 10.0 | 1570 | 0.0358 | 18.5078 | 5.7090 |
| 0.0 | 11.0 | 1727 | 0.0370 | 18.4649 | 5.8249 |
| 0.0 | 12.0 | 1884 | 0.0384 | 17.8316 | 5.5625 |
| 0.0 | 13.0 | 2041 | 0.0384 | 17.1984 | 5.4460 |
| 0.0 | 14.0 | 2198 | 0.0384 | 16.7270 | 5.3628 |
| 0.0 | 14.9088 | 2340 | 0.0385 | 16.6841 | 5.3334 |
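The WER and CER values above can be reproduced with the `evaluate` library (which wraps `jiwer`). The reference and prediction strings below are placeholders for illustration, not samples from the dataset.

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["بسم الله الرحمن الرحيم"]   # placeholder ground-truth transcript
predictions = ["بسم الله الرحمن الرحيم"]  # placeholder model output

# Both metrics return a fraction; multiply by 100 to match the table above.
print("WER:", 100 * wer_metric.compute(references=references, predictions=predictions))
print("CER:", 100 * cer_metric.compute(references=references, predictions=predictions))
```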
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0