aconeil committed
Commit d93c66c · verified · 1 Parent(s): 303f437

End of training

Files changed (1):
1. README.md +30 -35
README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
  metrics:
  - name: Wer
    type: wer
- value: 0.9895470383275261
+ value: 1.0
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,9 +33,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the audiofolder dataset.
 It achieves the following results on the evaluation set:
- - Loss: 2.7910
- - Wer: 0.9895
- - Cer: 0.7631
+ - Loss: nan
+ - Wer: 1.0
+ - Cer: 1.0
 
 ## Model description
 
@@ -54,48 +54,43 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 6
+ - learning_rate: 0.0003
+ - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
- - total_train_batch_size: 12
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - total_train_batch_size: 16
+ - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 10
- - num_epochs: 10
+ - lr_scheduler_warmup_steps: 300
+ - num_epochs: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
- |:-------------:|:------:|:----:|:---------------:|:------:|:------:|
- | 7.4397 | 0.4762 | 10 | 4.8025 | 1.2195 | 0.9223 |
- | 4.9419 | 0.9524 | 20 | 3.6993 | 1.0 | 1.0 |
- | 3.9437 | 1.4286 | 30 | 3.2743 | 1.0 | 0.9985 |
- | 3.6167 | 1.9048 | 40 | 3.0382 | 1.0 | 0.9909 |
- | 3.2024 | 2.3810 | 50 | 2.9654 | 1.0 | 0.9977 |
- | 3.0842 | 2.8571 | 60 | 2.9326 | 1.0 | 0.9162 |
- | 2.9728 | 3.3333 | 70 | 2.8471 | 1.0 | 0.9924 |
- | 3.1389 | 3.8095 | 80 | 2.8372 | 1.0 | 0.9962 |
- | 2.8151 | 4.2857 | 90 | 2.7709 | 1.0 | 0.9353 |
- | 2.8628 | 4.7619 | 100 | 2.7346 | 1.0 | 0.9436 |
- | 2.9721 | 5.2381 | 110 | 2.7073 | 0.9930 | 0.8751 |
- | 2.8984 | 5.7143 | 120 | 2.7487 | 0.9895 | 0.7982 |
- | 2.7447 | 6.1905 | 130 | 2.7735 | 0.9895 | 0.7768 |
- | 2.761 | 6.6667 | 140 | 2.7856 | 0.9895 | 0.7685 |
- | 2.8382 | 7.1429 | 150 | 2.7890 | 0.9895 | 0.7639 |
- | 2.6345 | 7.6190 | 160 | 2.7905 | 0.9895 | 0.7631 |
- | 2.8314 | 8.0952 | 170 | 2.7905 | 0.9895 | 0.7616 |
- | 2.4773 | 8.5714 | 180 | 2.7911 | 0.9895 | 0.7624 |
- | 2.8527 | 9.0476 | 190 | 2.7908 | 0.9895 | 0.7631 |
- | 3.0524 | 9.5238 | 200 | 2.7913 | 0.9895 | 0.7631 |
- | 2.6813 | 10.0 | 210 | 2.7910 | 0.9895 | 0.7631 |
+ | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
+ | 3.6561 | 6.25 | 100 | 3.1439 | 1.0 | 0.9246 |
+ | 5.5678 | 12.5 | 200 | 4.0590 | 1.1777 | 0.7807 |
+ | 5.583 | 18.75 | 300 | 4.0602 | 1.1533 | 0.7807 |
+ | 5.2971 | 25.0 | 400 | 4.0607 | 1.1498 | 0.7845 |
+ | 5.6771 | 31.25 | 500 | 4.0620 | 1.1568 | 0.7791 |
+ | 0.0 | 37.5 | 600 | nan | 1.0 | 1.0 |
+ | 0.0 | 43.75 | 700 | nan | 1.0 | 1.0 |
+ | 0.0 | 50.0 | 800 | nan | 1.0 | 1.0 |
+ | 0.0 | 56.25 | 900 | nan | 1.0 | 1.0 |
+ | 0.0 | 62.5 | 1000 | nan | 1.0 | 1.0 |
+ | 0.0 | 68.75 | 1100 | nan | 1.0 | 1.0 |
+ | 0.0 | 75.0 | 1200 | nan | 1.0 | 1.0 |
+ | 0.0 | 81.25 | 1300 | nan | 1.0 | 1.0 |
+ | 0.0 | 87.5 | 1400 | nan | 1.0 | 1.0 |
+ | 0.0 | 93.75 | 1500 | nan | 1.0 | 1.0 |
+ | 0.0 | 100.0 | 1600 | nan | 1.0 | 1.0 |
 
 
 ### Framework versions
 
- - Transformers 4.52.2
+ - Transformers 4.57.1
 - Pytorch 2.8.0+cu128
 - Datasets 3.0.0
- - Tokenizers 0.21.0
+ - Tokenizers 0.22.1
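For context on the hyperparameter change: the new run raises the learning rate thirty-fold (1e-05 to 0.0003), and the updated results table shows the training loss collapsing to 0.0 and the validation loss to `nan` from step 600 onward, a pattern often seen when the learning rate is too high. As a rough sketch, the listed hyperparameters map onto `transformers.TrainingArguments` along these lines. This is not the author's training script: `output_dir` is a placeholder, and `fp16=True` is an assumed stand-in for "mixed_precision_training: Native AMP".

```python
from transformers import TrainingArguments

# Sketch of the updated configuration. Only values listed in the card are
# taken from it; everything else here is an assumption for illustration.
training_args = TrainingArguments(
    output_dir="./w2v-bert-2.0-finetune",  # placeholder, not from the card
    learning_rate=3e-4,                    # 0.0003 (was 1e-05)
    per_device_train_batch_size=8,         # was 6
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,         # total train batch size: 8 * 2 = 16
    optim="adamw_torch_fused",             # OptimizerNames.ADAMW_TORCH_FUSED
    lr_scheduler_type="linear",
    warmup_steps=300,                      # was 10
    num_train_epochs=100,                  # was 10
    fp16=True,                             # assumed mapping for "Native AMP"
)
```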
 
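`Wer` and `Cer` in the table are word error rate and character error rate; 1.0 means no reference words or characters were recovered, and values above 1.0 (e.g. 1.1777 at step 200) occur when the hypothesis inserts extra tokens. A minimal sketch of how these metrics are typically computed with the `evaluate` library; the example strings are illustrative, not drawn from the audiofolder dataset:

```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Illustrative strings only; not from this model's evaluation set.
predictions = ["hello wrld"]
references = ["hello world"]

print(wer.compute(predictions=predictions, references=references))  # 0.5: 1 of 2 words wrong
print(cer.compute(predictions=predictions, references=references))  # ~0.091: 1 of 11 characters wrong
```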
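The auto-generated card carries no usage section; a minimal inference sketch for a CTC fine-tune of w2v-bert-2.0 could look like the following. The repo id is a placeholder (the diff does not name the checkpoint), and the silent dummy clip stands in for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2BertForCTC

repo_id = "aconeil/<checkpoint>"  # placeholder: actual repo id not stated in this diff
processor = AutoProcessor.from_pretrained(repo_id)
model = Wav2Vec2BertForCTC.from_pretrained(repo_id)

# Dummy 1-second clip; replace with real audio resampled to 16 kHz.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```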