Tags: Automatic Speech Recognition · Transformers · Safetensors · Swahili · English · whisper · Generated from Trainer
Jacaranda committed commit 4fca98d (verified) · 1 Parent(s): ee8e977

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -161,12 +161,12 @@ The fine-tuned model demonstrates superior performance in:
 The following charts illustrate the model's training progress and performance improvements:
 
 ### Word Error Rate (WER) Progress
-![WER Progress](./Assets/wer.png)
+![WER Progress](./Media/wer.png)
 
 The WER chart shows the steady improvement in transcription accuracy throughout the training process. Starting from approximately 21.6% WER at step 500, the model achieves its best performance of 14.7% WER by step 8000, demonstrating consistent learning and convergence.
 
 ### Learning Rate Schedule
-![Learning Rate](./Assets/lr.png)
+![Learning Rate](./Media/lr.png)
 
 The learning rate follows a cosine annealing schedule, starting at 1e-05 and gradually decreasing over the 8000 training steps. This schedule helps ensure stable training and prevents overfitting while allowing the model to fine-tune effectively.
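The WER metric tracked in the chart above is a word-level edit distance normalized by reference length. A minimal sketch of how it can be computed (this is an illustration, not the evaluation code used for this model; the `wer` function name is assumed):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[-1][-1] / len(ref)
```

For example, one substitution in a four-word reference gives a WER of 0.25, matching the percentage scale used in the chart.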
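The cosine annealing schedule described above (starting at 1e-05 over 8000 steps) can be sketched as follows; the function name and the decay to a zero floor are assumptions for illustration, not taken from the actual training configuration:

```python
import math

def cosine_annealed_lr(step: int, total_steps: int = 8000,
                       lr_max: float = 1e-5, lr_min: float = 0.0) -> float:
    """Learning rate at `step` under cosine annealing from lr_max toward lr_min."""
    progress = min(step, total_steps) / total_steps  # fraction of training done
    # Cosine factor goes 1 -> -1 as progress goes 0 -> 1,
    # so the LR decays smoothly from lr_max to lr_min.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```

At step 0 this returns the full 1e-05, at the midpoint half of it, and at step 8000 the floor value, which reproduces the smooth decay shape shown in the learning-rate chart.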