SamagraDataGov committed
Commit fb2d63d (verified) · Parent(s): 38a4dcb

Model save

Files changed (2):
  1. README.md +13 -11
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6249
-- Wer: 73.7172
+- Loss: 0.4687
+- Wer: 50.5902
 
 ## Model description
 
@@ -39,23 +39,25 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 3.75e-05
 - train_batch_size: 16
-- eval_batch_size: 4
+- eval_batch_size: 1
 - seed: 42
 - gradient_accumulation_steps: 2
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 312
-- num_epochs: 5
+- lr_scheduler_type: constant
+- lr_scheduler_warmup_steps: 50
+- training_steps: 200
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:------:|:----:|:---------------:|:--------:|
-| 2.2169 | 1.2698 | 40 | 1.7869 | 241.5580 |
-| 1.0037 | 2.5397 | 80 | 0.8910 | 78.5521 |
-| 0.5907 | 3.8095 | 120 | 0.6249 | 73.7172 |
+| Training Loss | Epoch | Step | Validation Loss | Wer |
+|:-------------:|:------:|:----:|:---------------:|:-------:|
+| 0.6584 | 1.2698 | 40 | 0.5936 | 65.0084 |
+| 0.3575 | 2.5397 | 80 | 0.4684 | 55.3120 |
+| 0.2254 | 3.8095 | 120 | 0.4344 | 50.0 |
+| 0.1486 | 5.0794 | 160 | 0.4562 | 52.1079 |
+| 0.0868 | 6.3492 | 200 | 0.4687 | 50.5902 |
 
 
 ### Framework versions
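The Wer column in the training-results table is word error rate in percent: word-level edit distance divided by the number of reference words. Because insertions count as errors, WER can exceed 100, which is how the earlier revision reported 241.5580 at step 40. Note also that total_train_batch_size 32 is just train_batch_size 16 × gradient_accumulation_steps 2. The card was presumably evaluated with a library such as `evaluate` or `jiwer`; the standalone function below is only an illustrative sketch of the same computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)
```

A hypothesis much longer than its reference yields WER above 100: for a one-word reference and a five-word hypothesis with no match, the distance is 5 and the WER is 500.0.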
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fa28468ac9b52d0b1f851838fb26d241bc1fa9fe621087a2fedf8be18e932834
+oid sha256:e0e9a8a6a2405f1ef747c57eb67d6430089edf114f56b97ad0d448acd182f8f2
 size 151099494
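The pytorch_model.bin entry is a Git LFS pointer file, not the weights themselves: this commit changes only the sha256 oid, while the blob size (151099494 bytes) is unchanged, as expected when retraining overwrites weights of the same architecture. A minimal sketch of checking a downloaded blob against the pointer's oid (the helper name is illustrative, not part of any LFS tooling):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a ~151 MB checkpoint never sits in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the oid recorded in the LFS pointer, e.g.:
# sha256_of_file("pytorch_model.bin") == "e0e9a8a6a2405f1ef747c57eb67d6430089edf114f56b97ad0d448acd182f8f2"
```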