[fast-whisper-finetuning/Whisper_w_PEFT.ipynb](https://github.com/Vaibhavs10/fast-whisper-finetuning/blob/main/Whisper_w_PEFT.ipynb)
- GPU: RTX 4090D × 1
- Dataset: TingChen-ppmc/whisper-small-Shanghai
- trainer.train(): with `per_device_train_batch_size=4`, training failed with `OutOfMemoryError: CUDA out of memory. Tried to allocate 60.00 MiB.`
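A common way around this kind of OOM (an assumption here, not something the notebook is known to do) is to halve the per-device batch size and compensate with gradient accumulation so the effective batch size is unchanged. A minimal `Seq2SeqTrainingArguments` sketch, with a hypothetical output directory:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical OOM workaround: halve the per-device batch size and keep the
# effective batch size at 4 via gradient accumulation. Values other than the
# batch-size pair are illustrative assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-shanghai-peft",  # hypothetical path
    per_device_train_batch_size=2,   # down from the 4 that triggered the OOM
    gradient_accumulation_steps=2,   # 2 x 2 = effective batch size of 4
    gradient_checkpointing=True,     # trade compute for activation memory
    fp16=True,                       # half-precision activations on the 4090D
)
```

Gradient checkpointing and fp16 further reduce activation memory at some speed cost; whether they are needed depends on how close the run is to the 24 GB limit.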
- Train Result

  [100/100 03:36, Epoch 0/1]

  | Step | Training Loss | Validation Loss |
  |------|---------------|-----------------|
  | 100  | 1.971000      | 1.035924        |
  `TrainOutput(global_step=100, training_loss=1.9710490417480468, metrics={'train_runtime': 217.3877, 'train_samples_per_second': 1.84, 'train_steps_per_second': 0.46, 'total_flos': 8.5832810496e+17, 'train_loss': 1.9710490417480468, 'epoch': 0.15060240963855423})`
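The `TrainOutput` numbers are internally consistent and let us back out the training-set size, assuming an effective batch size of 4 (the actual batch size after any OOM workaround is not recorded here):

```python
# Sanity-check the reported TrainOutput metrics.
global_step = 100
epoch = 0.15060240963855423          # fraction of one epoch completed
effective_batch = 4                  # assumption: per-device batch of 4, no accumulation

steps_per_epoch = global_step / epoch                 # optimizer steps in a full epoch
train_set_size = steps_per_epoch * effective_batch    # implied number of training samples
print(round(steps_per_epoch), round(train_set_size))  # → 664 2656

# Cross-check against the reported throughput:
train_runtime = 217.3877             # seconds
samples_per_second = 1.84
print(round(train_runtime * samples_per_second))      # → 400 samples seen in 100 steps
```

400 samples in 100 steps matches a batch size of 4, so the throughput and epoch figures agree; only 15% of one epoch was covered, which helps explain the poor evaluation numbers below.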
- Eval Result

  `{'eval/wer': 98.68189806678383, 'eval/normalized_wer': 103.27573794096472}`
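A normalized WER above 100% is not a bug: WER = (S + D + I) / N, where N counts reference words only, so enough insertions push it past 100%, as in the 103.28 above. A minimal pure-Python sketch (not the `evaluate`/`jiwer` implementation the notebook uses):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: (substitutions + deletions + insertions) / N."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,            # deletion
                       d[j - 1] + 1,        # insertion
                       prev + (r != h))     # substitution (or match)
            prev = cur
    return d[-1] / len(ref) * 100

print(wer("this is a test", "this is a test"))   # → 0.0
print(wer("hi", "hello there world"))            # → 300.0  (1 sub + 2 ins, N = 1)
```

With WER near 100%, the adapter has effectively not learned the task yet, consistent with training covering only a fraction of one epoch.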