whisper-small-SplitEndMovie

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9753
  • CER: 23.5657
  • WER: 38.6392

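The evaluation script is not included in this card, but word error rate (WER) and character error rate (CER) are conventionally computed as Levenshtein edit distance over the reference length, expressed as a percentage. A minimal illustrative sketch (not the exact pipeline used here):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over two sequences.
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level edit distance / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    return 100.0 * edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate as a percentage: character-level edit distance."""
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("a b c d", "a x c d")` is 25.0 (one substitution out of four reference words).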
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 8
  • mixed_precision_training: Native AMP
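With 2620 optimizer steps per epoch (from the results table below) and 8 epochs, training runs for roughly 20,960 steps. A minimal sketch of the linear schedule with 500 warmup steps, assuming the conventional warmup-then-linear-decay shape (as in `transformers`' linear scheduler; the exact implementation is not reproduced here):

```python
BASE_LR = 5e-06
WARMUP_STEPS = 500
TOTAL_STEPS = 20960  # 2620 steps/epoch * 8 epochs, from the results table

def linear_lr(step: int) -> float:
    """Linear warmup from 0 to BASE_LR, then linear decay to 0 at TOTAL_STEPS."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    remaining = max(0, TOTAL_STEPS - step)
    return BASE_LR * remaining / (TOTAL_STEPS - WARMUP_STEPS)
```

The peak learning rate 5e-06 is reached exactly at step 500 and decays to 0 by the final step.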

Training results

| Training Loss | Epoch | Step  | Validation Loss | CER     | WER     |
|---------------|-------|-------|-----------------|---------|---------|
| 0.8164        | 1.0   | 2620  | 0.7881          | 34.9387 | 55.4173 |
| 0.5435        | 2.0   | 5240  | 0.7639          | 29.3675 | 48.3657 |
| 0.3796        | 3.0   | 7860  | 0.7789          | 27.2382 | 43.7783 |
| 0.2621        | 4.0   | 10480 | 0.8139          | 25.8186 | 41.7910 |
| 0.1822        | 5.0   | 13100 | 0.8588          | 24.1368 | 39.4903 |
| 0.1278        | 6.0   | 15720 | 0.9034          | 25.3257 | 40.1683 |
| 0.0927        | 7.0   | 18340 | 0.9459          | 24.0187 | 39.2986 |
| 0.0718        | 8.0   | 20960 | 0.9753          | 23.5657 | 38.6392 |
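Note that the reported checkpoint is the final epoch, which has the best WER and CER in the table even though validation loss bottoms out at epoch 2 and rises afterwards (error-rate metrics and cross-entropy loss need not move together). This can be checked directly from the table values:

```python
# (epoch, validation_loss, cer, wer) copied from the training results table
results = [
    (1, 0.7881, 34.9387, 55.4173),
    (2, 0.7639, 29.3675, 48.3657),
    (3, 0.7789, 27.2382, 43.7783),
    (4, 0.8139, 25.8186, 41.7910),
    (5, 0.8588, 24.1368, 39.4903),
    (6, 0.9034, 25.3257, 40.1683),
    (7, 0.9459, 24.0187, 39.2986),
    (8, 0.9753, 23.5657, 38.6392),
]

best_by_wer = min(results, key=lambda r: r[3])   # lowest WER: epoch 8
best_by_loss = min(results, key=lambda r: r[1])  # lowest validation loss: epoch 2
```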

Framework versions

  • Transformers 4.56.1
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.22.0