lkhagvaa12 committed verified commit fe86c28 (parent: 6223114)

End of training
Files changed (2):
  1. README.md +9 -6
  2. generation_config.json +1 -1
README.md CHANGED
@@ -18,9 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6691
-- Wer Ortho: 73.1332
-- Wer: 73.1148
+- Loss: 0.4842
+- Wer Ortho: 60.9132
+- Wer: 60.9029
 
 ## Model description
 
@@ -46,19 +46,22 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- training_steps: 500
+- training_steps: 2000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
 |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
-| 0.8185 | 2.4752 | 500 | 0.6691 | 73.1332 | 73.1148 |
+| 1.0053 | 1.5198 | 500 | 0.6511 | 72.9011 | 72.8777 |
+| 0.7337 | 3.0395 | 1000 | 0.5380 | 65.5752 | 65.5485 |
+| 0.5557 | 4.5593 | 1500 | 0.4971 | 63.5066 | 63.4855 |
+| 0.3969 | 6.0790 | 2000 | 0.4842 | 60.9132 | 60.9029 |
 
 
 ### Framework versions
 
-- Transformers 4.52.4
+- Transformers 4.51.3
 - Pytorch 2.6.0+cu124
 - Datasets 3.6.0
 - Tokenizers 0.21.2
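The Wer and Wer Ortho columns in the table above are Word Error Rate percentages: the word-level edit distance between the reference and predicted transcripts, divided by the number of reference words, times 100. Model cards like this one typically compute it with libraries such as `evaluate` or `jiwer`; as an illustration only (the `wer` helper below is a hypothetical stdlib-only sketch, not the card's actual tooling), the metric reduces to:

```python
# Word Error Rate: word-level Levenshtein distance between a reference and a
# hypothesis transcript, divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Rolling-array dynamic programming over the edit distance.
    d = list(range(len(hyp) + 1))  # row for an empty reference prefix
    for i, r in enumerate(ref, start=1):
        prev, d[0] = d[0], i       # prev holds the diagonal d[i-1][j-1]
        for j, h in enumerate(hyp, start=1):
            cur = min(d[j] + 1,          # deletion
                      d[j - 1] + 1,      # insertion
                      prev + (r != h))   # substitution (free on a match)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)
```

Note that WER is a ratio of errors to reference words, so it can exceed 1.0; that is why the card reports it scaled to a percentage (e.g. 60.9029).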
generation_config.json CHANGED
@@ -235,5 +235,5 @@
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.52.4"
+  "transformers_version": "4.51.3"
 }
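The `constant_with_warmup` scheduler named in the README's hyperparameters ramps the learning rate linearly from zero over the warmup period (50 steps here), then holds it flat for the remaining training steps. A minimal sketch of that behavior, assuming a placeholder `base_lr` since the base learning rate does not appear in this diff:

```python
# "constant_with_warmup": linear ramp from 0 to base_lr over warmup_steps,
# then constant. base_lr is a placeholder value for illustration; the actual
# learning rate used in training is not shown in the diff above.
def constant_with_warmup_lr(step: int, base_lr: float, warmup_steps: int = 50) -> float:
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr
```

With 50 warmup steps out of 2000 total, the ramp covers only the first 2.5% of training; every step after step 50 runs at the full base rate.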