Joserzapata committed · Commit 6b712cf · 1 Parent(s): d7cddce

update model card README.md
Files changed (1): README.md (+9 −7)
```diff
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 0.3347107438016529
+      value: 0.34710743801652894
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,9 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6329
-- Wer Ortho: 0.3374
-- Wer: 0.3347
+- Loss: 0.6160
+- Wer Ortho: 0.3498
+- Wer: 0.3471
 
 ## Model description
 
@@ -53,9 +53,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
@@ -65,7 +67,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
-| 0.001         | 17.86 | 500  | 0.6329          | 0.3374    | 0.3347 |
+| 0.0007        | 17.86 | 500  | 0.6160          | 0.3498    | 0.3471 |
 
 
 ### Framework versions
```
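Note that the batch-size change in this commit keeps the effective batch size constant: a per-device batch size of 4 combined with 4 gradient-accumulation steps reproduces the previous total of 16. A minimal sketch of that arithmetic (plain Python, not the card's actual training code):

```python
# Values taken from the commit's hyperparameter diff.
per_device_train_batch_size = 4   # new train_batch_size
gradient_accumulation_steps = 4   # newly added setting

# Effective (total) batch size per optimizer step: gradients from 4
# micro-batches of 4 samples are accumulated before each update.
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 16  # matches the previous train_batch_size
```

This trades memory for compute: each optimizer step sees the same 16 samples as before, but they are processed in four smaller forward/backward passes.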
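The Wer values reported in the card are word error rates: word-level edit distance between hypothesis and reference, divided by the reference length. A minimal pure-Python sketch of the metric, assuming whitespace tokenization (an illustration only, not the evaluation code used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic program over the (ref x hyp) edit-distance table.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # value of d[i-1][j-1]
        d[0] = i
        for j, h in enumerate(hyp, 1):
            cur = d[j]       # value of d[i-1][j]
            # deletion, insertion, substitution/match
            d[j] = min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
            prev = cur
    return d[-1] / len(ref)

print(wer("a b c d", "a b x d"))  # → 0.25 (one substitution over four words)
```

In practice, "Wer Ortho" is typically computed on the orthographic (unnormalized) text and "Wer" after normalization, which is why the two values differ slightly.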