---
library_name: peft
language:
- it
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- b-brave-clean
metrics:
- wer
model-index:
- name: Whisper Medium
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: b-brave-clean
      type: b-brave-clean
      config: default
      split: test
      args: default
    metrics:
    - type: wer
      value: 41.260744985673355
      name: Wer
---

# Whisper Medium

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the b-brave-clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3776
- Wer: 41.2607
- Cer: 30.2338
- Lr: 0.0000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 12
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     | Lr     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| 1.2755        | 1.0   | 251  | 1.0659          | 71.4900 | 45.8103 | 0.0001 |
| 0.8309        | 2.0   | 502  | 0.7274          | 63.6103 | 44.2868 | 0.0002 |
| 0.5827        | 3.0   | 753  | 0.5902          | 56.1605 | 38.9283 | 0.0002 |
| 0.3714        | 4.0   | 1004 | 0.5072          | 53.2951 | 38.7444 | 0.0003 |
| 0.1876        | 5.0   | 1255 | 0.4535          | 46.2751 | 32.8080 | 0.0003 |
| 0.1278        | 6.0   | 1506 | 0.3975          | 44.8424 | 33.0444 | 0.0002 |
| 0.0562        | 7.0   | 1757 | 0.3698          | 36.1032 | 26.3987 | 0.0002 |
| 0.0209        | 8.0   | 2008 | 0.4188          | 56.3037 | 46.4145 | 0.0001 |
| 0.0123        | 9.0   | 2259 | 0.3916          | 40.8309 | 29.8398 | 0.0001 |
| 0.005         | 10.0  | 2510 | 0.3819          | 41.5473 | 30.4965 | 0.0001 |
| 0.0031        | 11.0  | 2761 | 0.3779          | 41.8338 | 30.7591 | 0.0000 |
| 0.0018        | 12.0  | 3012 | 0.3776          | 41.2607 | 30.2338 | 0.0000 |

### Framework versions

- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.2.0
- Datasets 3.2.0
- Tokenizers 0.21.0
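
### About the reported metrics

The WER and CER figures above are percentages. As a minimal sketch of how WER is typically computed (word-level Levenshtein edit distance divided by the number of reference words; CER is the same computation over characters), assuming nothing about the exact evaluation script used for this model, and with made-up Italian example sentences that are not from b-brave-clean:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (one-row DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            prev, d[j] = d[j], min(
                d[j] + 1,         # deletion
                d[j - 1] + 1,     # insertion
                prev + (r != h),  # substitution (free when tokens match)
            )
    return d[len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage, matching the units in this card."""
    ref, hyp = reference.split(), hypothesis.split()
    return 100.0 * edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate as a percentage."""
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)

# One word dropped out of three: WER = 33.33%
print(round(wer("buongiorno a tutti", "buongiorno tutti"), 2))
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why per-epoch values above can move non-monotonically even as the loss decreases.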