---
library_name: transformers
language:
  - spa
license: apache-2.0
base_model: rasel35/whisper-base-es-medical-terms
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper Pre Tuned 300 Audios - Nacho v3.0
    results: []
---

Whisper Pre Tuned 300 Audios - Nacho v3.0

This model is a fine-tuned version of rasel35/whisper-base-es-medical-terms on the 300 audios 1.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3444
  • WER: 16.1793
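
The snippet below is a minimal usage sketch for loading this checkpoint with the standard transformers automatic-speech-recognition pipeline. The repository id and the audio file path are placeholders (the exact Hub id of this model is not stated in the card).

```python
# Minimal usage sketch. MODEL_ID is a hypothetical placeholder; replace it with
# the actual Hugging Face Hub repository id of this checkpoint.
from transformers import pipeline

MODEL_ID = "igarciahuidobro/whisper-pre-tuned-300-audios-nacho-v3"  # placeholder

asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# Transcribe a local Spanish audio file (file name is illustrative only).
result = asr("consulta_medica.wav")
print(result["text"])
```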

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments (see the configuration sketch after this list)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 10
  • mixed_precision_training: Native AMP
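
The following is a minimal sketch of how these hyperparameters map onto transformers.Seq2SeqTrainingArguments. The actual training script for this checkpoint is not included in the card, and output_dir is a placeholder.

```python
# Configuration sketch reproducing the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-pre-tuned-300-audios",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10,
    fp16=True,                       # native AMP mixed-precision training
    eval_strategy="epoch",
)
```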

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|---------------|--------|------|-----------------|---------|
| 1.8732        | 1.0    | 18   | 1.1907          | 60.6238 |
| 0.676         | 2.0    | 36   | 0.4489          | 21.0526 |
| 0.2633        | 3.0    | 54   | 0.4061          | 17.9337 |
| 0.132         | 4.0    | 72   | 0.3804          | 17.9337 |
| 0.0802        | 5.0    | 90   | 0.3507          | 41.7154 |
| 0.0498        | 6.0    | 108  | 0.3660          | 18.5185 |
| 0.036         | 7.0    | 126  | 0.3614          | 17.3489 |
| 0.0213        | 8.0    | 144  | 0.3329          | 15.9844 |
| 0.0152        | 9.0    | 162  | 0.3453          | 15.7895 |
| 0.0042        | 9.4507 | 170  | 0.3444          | 16.1793 |
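
The reported word error rate can be computed with the evaluate library, as sketched below. This is not the exact evaluation code behind the table above, and the prediction/reference strings are illustrative only.

```python
# Sketch of a WER computation using the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["el paciente presenta fiebre alta"]       # illustrative output
references = ["el paciente presenta fiebre muy alta"]    # illustrative ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```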

Framework versions

  • Transformers 4.48.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.21.0