AdaCodruta committed on
Commit 8bfffed · verified · 1 Parent(s): ba69ddb

End of training

Files changed (1):
  1. README.md +11 -7
README.md CHANGED

@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 80.51073708647708
+      value: 47.51015670342426
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 17.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1000
-- Wer: 80.5107
+- Loss: 1.6152
+- Wer: 47.5102
 
 ## Model description
 
@@ -59,14 +59,18 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 1000
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Wer     |
-|:-------------:|:------:|:----:|:---------------:|:-------:|
-| 0.4601        | 5.4945 | 1000 | 1.1000          | 80.5107 |
+| Training Loss | Epoch   | Step | Validation Loss | Wer      |
+|:-------------:|:-------:|:----:|:---------------:|:--------:|
+| 0.5004        | 5.4945  | 1000 | 1.1554          | 106.2565 |
+| 0.0896        | 10.9890 | 2000 | 1.3810          | 51.0737  |
+| 0.0121        | 16.4835 | 3000 | 1.5371          | 49.9013  |
+| 0.0027        | 21.9780 | 4000 | 1.5901          | 49.1468  |
+| 0.0008        | 27.4725 | 5000 | 1.6152          | 47.5102  |
 
 
 ### Framework versions
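The `Wer` values in the diff are word error rates in percent: the word-level edit distance (substitutions + deletions + insertions) between hypothesis and reference, divided by the number of reference words, times 100. A minimal pure-Python sketch with made-up example strings; real evaluations normally use a library such as `jiwer` or the `evaluate` package rather than this hand-rolled version:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length * 100."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Hypothetical transcripts, not taken from Common Voice:
print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over six words
```

Because insertions count as errors too, WER can exceed 100, which is why the first logged row in the table can legitimately show 106.2565.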