---
library_name: transformers
language:
  - ar
license: apache-2.0
base_model: openai/whisper-base
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper base AR - BA
    results: []
---

# Whisper base AR - BA

This model is a fine-tuned version of openai/whisper-base on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set:

- Loss: 0.1260
- Wer: 0.2865
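
The snippet below is a minimal inference sketch using the `transformers` automatic-speech-recognition pipeline. The repository id and the audio file name are placeholders, not confirmed by this card; substitute the actual Hub repository of this model.

```python
# Minimal inference sketch; the model repo id below is hypothetical.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/whisper-base-ar-ba",  # hypothetical repo id, replace with the real one
    device=device,
)

# Whisper expects 16 kHz audio; the pipeline resamples file inputs automatically.
result = asr("recitation.wav")  # placeholder audio file
print(result["text"])
```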

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
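
The hyperparameters above map onto `transformers` training arguments roughly as sketched below. This is an illustrative configuration, not the exact training script used for this model; the output directory and any setting not listed above are assumptions.

```python
# Illustrative mapping of the listed hyperparameters to Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar-ba",   # hypothetical output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,       # effective train batch size 8 * 4 = 32
    seed=42,
    optim="adamw_torch",                 # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15,
    fp16=True,                           # native AMP mixed precision
)
```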

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 78.8449       | 1.0     | 313  | 0.1892          | 0.7483 |
| 23.7046       | 2.0     | 626  | 0.1465          | 0.4188 |
| 13.1378       | 3.0     | 939  | 0.1347          | 0.3632 |
| 8.2072        | 4.0     | 1252 | 0.1312          | 0.3285 |
| 5.8166        | 5.0     | 1565 | 0.1316          | 0.2937 |
| 4.5461        | 6.0     | 1878 | 0.1339          | 0.2916 |
| 3.8785        | 7.0     | 2191 | 0.1276          | 0.2838 |
| 3.1975        | 8.0     | 2504 | 0.1253          | 0.2762 |
| 2.8784        | 9.0     | 2817 | 0.1240          | 0.2881 |
| 2.6303        | 10.0    | 3130 | 0.1238          | 0.2719 |
| 2.481         | 11.0    | 3443 | 0.1225          | 0.2670 |
| 2.2994        | 12.0    | 3756 | 0.1221          | 0.2641 |
| 2.0863        | 13.0    | 4069 | 0.1214          | 0.2672 |
| 2.0235        | 14.0    | 4382 | 0.1213          | 0.2638 |
| 2.015         | 14.9536 | 4680 | 0.1213          | 0.2626 |
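
The Wer column reports word error rate on the evaluation set. A minimal sketch of how WER can be computed on decoded transcripts with the `evaluate` library is shown below; the example strings are illustrative only and are not taken from the dataset.

```python
# Minimal WER computation sketch using the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["decoded hypothesis text"]   # model outputs after decoding
references = ["reference transcript text"]  # ground-truth transcripts

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```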

### Framework versions

- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0