HK0712 committed on
Commit fee1d7c · verified · 1 parent: eabefbc

End of training

README.md CHANGED
@@ -16,8 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.4084
- - Per: 0.1818
+ - Loss: 0.3619
+ - Per: 0.1676
 
 ## Model description
 
@@ -38,23 +38,25 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
 - train_batch_size: 4
- - eval_batch_size: 2
+ - eval_batch_size: 4
 - seed: 42
 - gradient_accumulation_steps: 8
 - total_train_batch_size: 32
 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 300
- - training_steps: 10000
+ - training_steps: 15000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Per |
- |:-------------:|:------:|:----:|:---------------:|:------:|
- | 0.6277 | 0.1630 | 3000 | 0.5961 | 0.2501 |
- | 0.445 | 0.3259 | 6000 | 0.4649 | 0.2070 |
- | 0.3751 | 0.4889 | 9000 | 0.4084 | 0.1818 |
+ | Training Loss | Epoch | Step | Validation Loss | Per |
+ |:-------------:|:------:|:-----:|:---------------:|:------:|
+ | 0.6277 | 0.1630 | 3000 | 0.5961 | 0.2501 |
+ | 0.445 | 0.3259 | 6000 | 0.4649 | 0.2070 |
+ | 0.3751 | 0.4889 | 9000 | 0.4084 | 0.1818 |
+ | 0.4281 | 0.6518 | 12000 | 0.3907 | 0.1792 |
+ | 0.3835 | 0.8148 | 15000 | 0.3619 | 0.1676 |
 
 
 ### Framework versions
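The hyperparameters in the README diff above imply the effective batch size the card reports: 4 samples per device accumulated over 8 steps gives 32. A minimal sketch of that arithmetic, plus the epoch length implied by the new results table (plain stdlib Python; `steps_per_epoch` is a derived estimate, not a value logged in the card):

```python
# Effective batch size from the card's hyperparameters.
train_batch_size = 4             # per-device batch size
gradient_accumulation_steps = 8  # forward passes accumulated per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32  # matches total_train_batch_size in the card

# The updated table logs epoch 0.8148 at step 15000, so one epoch is
# roughly 15000 / 0.8148 optimizer steps for whatever dataset was used.
steps_per_epoch = round(15000 / 0.8148)
print(total_train_batch_size, steps_per_epoch)
```

This also explains why the run was extended: at ~0.81 epochs after 15000 steps, even the longer schedule covers less than one pass over the data.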
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:576651af670236de11263858c90cfc1a69529b94d692a43c4c25354ab18cdce1
+ oid sha256:cfdf8a9c278164b6dff11f1e9ac6546d2a318cfdc554ea2c1438f216b114d245
 size 377718788
runs/Oct21_11-08-09_Ray-PC-X3D/events.out.tfevents.1761016091.Ray-PC-X3D.82751.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c1439b70c72681f58727dcca4694ac22f8bf6a0cd4150b938299036302098bc8
- size 102038
+ oid sha256:983b7ebf41c679d90cadc4badaeff65a5149d3ce2ed75c773a0056c1f80dea26
+ size 113260
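The `model.safetensors` and tfevents changes above swap git-lfs pointer files, not the blobs themselves; the `oid sha256:` field is the SHA-256 digest of the actual file content. A hedged sketch of verifying a downloaded blob against its pointer's oid (file name and helper are illustrative; the real check would hash `model.safetensors` and compare with the new pointer's oid):

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the sha256 hex digest that git-lfs records as a pointer's oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstrate on a small stand-in blob rather than the 377 MB weights file.
with open("blob.bin", "wb") as f:
    f.write(b"hello")
print(lfs_oid("blob.bin"))  # 64-char hex digest, same format as the oid lines above
```

A mismatch between this digest and the pointer's oid means the download is truncated or corrupted.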