# Whisper-Small En-10m
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the LibriSpeech dataset. It achieves the following results on the evaluation set:
- Loss: 0.6197
- WER: 3.4591
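For reference, here is a minimal inference sketch using the 🤗 Transformers `pipeline` API. The repo id `Pageee/FT-English-10me` is taken from this page, and the audio filename is a placeholder; adjust both to your setup.

```python
# Hedged usage sketch; repo id and audio path are placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Pageee/FT-English-10me",  # the fine-tuned Whisper-small checkpoint from this card
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.flac")
print(result["text"])
```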
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 600
- mixed_precision_training: Native AMP
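The training script itself is not included in this card; the sketch below shows how these values would map onto `Seq2SeqTrainingArguments` in the standard Whisper fine-tuning recipe. The output directory and the 100-step eval cadence (inferred from the results table below) are assumptions, and the Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the hyperparameters listed above; not the author's script.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en-10m",  # placeholder
    learning_rate=1e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=600,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=100,                  # assumed from the 100-step rows in the results table
)
```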
### Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 0.6138 | 18.1818 | 100 | 0.8199 | 3.4102 |
| 0.5068 | 36.3636 | 200 | 0.7678 | 3.4367 |
| 0.3871 | 54.5455 | 300 | 0.6937 | 3.4632 |
| 0.3179 | 72.7273 | 400 | 0.6497 | 3.4530 |
| 0.277 | 90.9091 | 500 | 0.6270 | 3.4591 |
| 0.2598 | 109.0909 | 600 | 0.6197 | 3.4591 |
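The WER column is presumably computed with the standard word-error-rate metric; a sketch of that computation with the 🤗 `evaluate` library is below. The card does not document the text normalization used, and the ×100 scaling assumes the reported values are percentages.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder strings; in training these come from decoded model predictions
# and the reference transcripts of the evaluation split.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```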
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1