---
library_name: transformers
language:
  - ar
license: apache-2.0
base_model: openai/whisper-base
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper base AR - BA
    results: []
---

Whisper base AR - BA

This model is a fine-tuned version of openai/whisper-base on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the metrics):

  • Loss: 0.0746
  • Wer: 0.2358
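
The checkpoint can be used with the standard transformers automatic-speech-recognition pipeline. The sketch below is a minimal example, not taken from this card: the Hub repository id and the audio file name are placeholders to be replaced with the actual checkpoint location and a local recording.

```python
# Minimal inference sketch. The repo id and audio path are placeholders,
# not values from this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<hub-username>/whisper-base-ar-ba",  # placeholder repo id
    generate_kwargs={"language": "arabic", "task": "transcribe"},
)

result = asr("example_ayah.wav")  # local audio file; the pipeline decodes and resamples it
print(result["text"])
```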

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch mirroring them appears after the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 15
  • mixed_precision_training: Native AMP
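
For reference, the hyperparameters above map onto a transformers Seq2SeqTrainingArguments configuration roughly as sketched below. The output directory, and any setting not in the list above, is an assumption rather than a value from this card.

```python
# Sketch of training arguments mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar-ba",  # assumed placeholder, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # 8 x 4 = total train batch size of 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15,
    fp16=True,                          # native AMP mixed precision
)
```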

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|---------------|---------|------|-----------------|--------|
| 0.397         | 1.0     | 469  | 0.0686          | 0.2067 |
| 0.3526        | 2.0     | 938  | 0.0674          | 0.2345 |
| 0.3181        | 3.0     | 1407 | 0.0711          | 0.2225 |
| 0.2689        | 4.0     | 1876 | 0.0724          | 0.2273 |
| 0.2044        | 5.0     | 2345 | 0.0687          | 0.2366 |
| 0.184         | 6.0     | 2814 | 0.0676          | 0.2165 |
| 0.1404        | 7.0     | 3283 | 0.0673          | 0.2230 |
| 0.1461        | 8.0     | 3752 | 0.0657          | 0.2049 |
| 0.1269        | 9.0     | 4221 | 0.0662          | 0.2055 |
| 0.1089        | 10.0    | 4690 | 0.0668          | 0.2070 |
| 0.0835        | 11.0    | 5159 | 0.0639          | 0.2049 |
| 0.0693        | 12.0    | 5628 | 0.0656          | 0.2152 |
| 0.054         | 13.0    | 6097 | 0.0654          | 0.2160 |
| 0.0419        | 14.0    | 6566 | 0.0662          | 0.2029 |
| 0.0342        | 14.9685 | 7020 | 0.0668          | 0.2180 |
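
The Wer column above is the standard word error rate. It can be computed with the evaluate library as in the sketch below; the prediction and reference strings are made-up examples, not data from the evaluation set.

```python
# Illustrative WER computation with the evaluate library.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["bismillahi rahmani raheem"],      # illustrative hypothesis
    references=["bismillahi alrahmani alraheem"],   # illustrative reference
)
print(score)  # fraction of word-level substitutions, insertions, and deletions
```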

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1