# Hanhpt23/whisper-small-Encode-GermanMed-full
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hanhpt23/GermanMed-full dataset. It achieves the following results on the evaluation set:

- Loss: 0.7961
- WER: 26.1648 (word error rate, in percent)
## Model description
More information needed
## Intended uses & limitations
More information needed
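The card does not include a usage snippet, so here is a minimal inference sketch (not from the original card) using the `transformers` ASR pipeline. The repository id is taken from this model's Hub page; the audio path and language setting are illustrative.

```python
# Minimal inference sketch, assuming the Hub repository id
# Hanhpt23/whisper-small-Encode-GermanMed-full and a local recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-small-Encode-GermanMed-full",
)

# "sample.wav" is a placeholder path; Whisper expects 16 kHz mono audio.
result = asr("sample.wav", generate_kwargs={"language": "german"})
print(result["text"])
```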
## Training and evaluation data
More information needed
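The summary above names Hanhpt23/GermanMed-full as the fine-tuning data. A minimal sketch for inspecting it with the `datasets` library, assuming the dataset is publicly available on the Hub under that id (its split and column names are not documented in the card):

```python
# Minimal data-inspection sketch; assumes the dataset id from the summary
# is hosted on the Hub and that a "train" split exists.
from datasets import load_dataset

dataset = load_dataset("Hanhpt23/GermanMed-full")
print(dataset)               # split names and sizes
print(dataset["train"][0])   # first example; column names are dataset-specific
```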
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent training arguments follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
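For reference, here is a hedged sketch of how these settings map onto `transformers.Seq2SeqTrainingArguments`. The original training script is not part of the card, so the output directory and evaluation cadence are assumptions (the results table logs one evaluation per epoch); the Adam betas and epsilon listed above are the library defaults and need not be set explicitly.

```python
# Hedged reconstruction of the listed hyperparameters as
# Seq2SeqTrainingArguments; not the author's actual training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-germanmed",  # placeholder, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    eval_strategy="epoch",                 # inferred from the per-epoch results
)
```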
### Training results

| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.6649 | 1.0 | 194 | 0.6657 | 43.7108 |
| 0.3613 | 2.0 | 388 | 0.6721 | 38.5375 |
| 0.2014 | 3.0 | 582 | 0.6927 | 38.8769 |
| 0.1383 | 4.0 | 776 | 0.7546 | 35.0098 |
| 0.1053 | 5.0 | 970 | 0.7698 | 34.0636 |
| 0.086 | 6.0 | 1164 | 0.7729 | 29.7028 |
| 0.059 | 7.0 | 1358 | 0.7985 | 36.8405 |
| 0.0471 | 8.0 | 1552 | 0.8244 | 30.3919 |
| 0.039 | 9.0 | 1746 | 0.8291 | 30.2067 |
| 0.0195 | 10.0 | 1940 | 0.8342 | 33.1379 |
| 0.0149 | 11.0 | 2134 | 0.8184 | 30.7004 |
| 0.0103 | 12.0 | 2328 | 0.8249 | 29.4868 |
| 0.0077 | 13.0 | 2522 | 0.8106 | 33.0351 |
| 0.0039 | 14.0 | 2716 | 0.7991 | 29.0445 |
| 0.0017 | 15.0 | 2910 | 0.8102 | 28.0160 |
| 0.0019 | 16.0 | 3104 | 0.7934 | 26.5247 |
| 0.0014 | 17.0 | 3298 | 0.7996 | 26.7201 |
| 0.0002 | 18.0 | 3492 | 0.7955 | 26.5659 |
| 0.0002 | 19.0 | 3686 | 0.7959 | 26.2059 |
| 0.0003 | 20.0 | 3880 | 0.7961 | 26.1648 |
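The WER column is on a 0–100 scale. As a sanity check on that convention, here is a minimal sketch (not from the card) of computing WER with the `evaluate` library; the reference and prediction strings are made-up placeholders, not samples from GermanMed-full.

```python
# Minimal WER computation sketch using the evaluate library; the strings
# below are placeholders, not data from GermanMed-full.
import evaluate

wer_metric = evaluate.load("wer")
references = ["der patient klagt über starke kopfschmerzen"]
predictions = ["der patient klagt über kopfschmerzen"]

# evaluate returns a fraction; scale by 100 to match the table's convention.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}")
```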
### Framework versions

- Transformers 4.41.1
- PyTorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1