---
library_name: transformers
language:
  - ar
license: apache-2.0
base_model: openai/whisper-base
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper base AR - BA
    results: []
---

# Whisper base AR - BA

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set:

- Loss: 0.1055
- Wer: 0.2245
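
The model can be loaded through the Transformers `pipeline` API. Below is a minimal usage sketch; the repository id and the audio file name are placeholders rather than details taken from this card.

```python
from transformers import pipeline

# Hypothetical repository id -- replace with the actual model id on the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/whisper-base-ar-ba",
)

# Transcribe a local Arabic audio file (placeholder path); Whisper expects 16 kHz mono audio.
result = asr("recitation.wav")
print(result["text"])
```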

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
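
For reference, the settings above map onto `Seq2SeqTrainingArguments` roughly as sketched below; the output directory is a placeholder, and the use of the Seq2Seq trainer is an assumption, since the card does not include the training script.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar-ba",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective (total) train batch size of 32
    num_train_epochs=15,
    lr_scheduler_type="linear",
    warmup_steps=500,
    fp16=True,                       # native AMP mixed precision
    seed=42,
)
```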

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 4.1388        | 1.0     | 313  | 0.1088          | 0.2255 |
| 4.005         | 2.0     | 626  | 0.1127          | 0.2380 |
| 3.1681        | 3.0     | 939  | 0.1117          | 0.2215 |
| 2.4917        | 4.0     | 1252 | 0.1089          | 0.2202 |
| 2.1826        | 5.0     | 1565 | 0.1062          | 0.2146 |
| 1.9244        | 6.0     | 1878 | 0.1062          | 0.2263 |
| 1.7281        | 7.0     | 2191 | 0.1032          | 0.2188 |
| 1.5604        | 8.0     | 2504 | 0.1032          | 0.2193 |
| 1.5071        | 9.0     | 2817 | 0.1031          | 0.2244 |
| 1.3603        | 10.0    | 3130 | 0.1033          | 0.2122 |
| 1.2858        | 11.0    | 3443 | 0.1022          | 0.2134 |
| 1.1788        | 12.0    | 3756 | 0.1024          | 0.2106 |
| 1.1271        | 13.0    | 4069 | 0.1015          | 0.2121 |
| 1.0559        | 14.0    | 4382 | 0.1017          | 0.2098 |
| 1.0875        | 14.9536 | 4680 | 0.1013          | 0.2103 |
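
The Wer values above are word error rates on the evaluation set. A minimal sketch of computing such a score with the `evaluate` library follows; the example transcripts are illustrative, and the card does not state which tool was actually used.

```python
import evaluate

# Load the word error rate metric.
wer_metric = evaluate.load("wer")

predictions = ["bismillah alrahman alraheem"]   # illustrative model output
references = ["bismillahi alrahmani alraheem"]  # illustrative ground truth
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```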

### Framework versions

- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0