Hcask committed on
Commit 02acc07 · verified · 1 Parent(s): 3c27c93

End of training

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: ntu-spml/distilhubert
+base_model: facebook/hubert-base-ls960
 tags:
 - generated_from_trainer
 datasets:
@@ -9,7 +9,7 @@ datasets:
 metrics:
 - accuracy
 model-index:
-- name: distilhubert-finetuned-gtzan
+- name: hubert-base-ls960-finetuned-gtzan
   results:
   - task:
       name: Audio Classification
@@ -23,18 +23,18 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.782608695652174
+      value: 0.7391304347826086
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# distilhubert-finetuned-gtzan
+# hubert-base-ls960-finetuned-gtzan
 
-This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
+This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2172
-- Accuracy: 0.7826
+- Loss: 1.1234
+- Accuracy: 0.7391
 
 ## Model description
 
@@ -60,28 +60,33 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.1308        | 1.0   | 25   | 2.1309          | 0.0870   |
-| 1.9034        | 2.0   | 50   | 1.9650          | 0.3478   |
-| 1.6085        | 3.0   | 75   | 1.7543          | 0.5217   |
-| 1.4083        | 4.0   | 100  | 1.6225          | 0.5217   |
-| 1.4712        | 5.0   | 125  | 1.4741          | 0.6957   |
-| 1.1667        | 6.0   | 150  | 1.3947          | 0.6087   |
-| 1.0986        | 7.0   | 175  | 1.3320          | 0.7391   |
-| 1.0781        | 8.0   | 200  | 1.2441          | 0.7391   |
-| 0.96          | 9.0   | 225  | 1.2146          | 0.7826   |
-| 0.9224        | 10.0  | 250  | 1.2172          | 0.7826   |
+| 1.4277        | 1.0   | 25   | 1.5627          | 0.4783   |
+| 1.4946        | 2.0   | 50   | 1.4727          | 0.5217   |
+| 1.051         | 3.0   | 75   | 1.3207          | 0.6087   |
+| 1.0897        | 4.0   | 100  | 1.3614          | 0.6522   |
+| 1.1461        | 5.0   | 125  | 1.3143          | 0.5652   |
+| 0.6919        | 6.0   | 150  | 1.1131          | 0.6087   |
+| 0.7273        | 7.0   | 175  | 1.4138          | 0.6522   |
+| 0.5955        | 8.0   | 200  | 1.2106          | 0.6957   |
+| 0.4823        | 9.0   | 225  | 1.1681          | 0.6087   |
+| 0.5178        | 10.0  | 250  | 1.1616          | 0.6522   |
+| 0.4635        | 11.0  | 275  | 0.9685          | 0.7826   |
+| 0.4622        | 12.0  | 300  | 0.9625          | 0.7826   |
+| 0.3048        | 13.0  | 325  | 1.0364          | 0.7391   |
+| 0.1576        | 14.0  | 350  | 1.0571          | 0.7391   |
+| 0.1876        | 15.0  | 375  | 1.1234          | 0.7391   |
 
 
 ### Framework versions
 
-- Transformers 4.48.3
-- Pytorch 2.5.1+cu124
-- Datasets 3.3.2
-- Tokenizers 0.21.0
+- Transformers 4.51.3
+- Pytorch 2.6.0+cu124
+- Datasets 3.5.1
+- Tokenizers 0.21.1
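The accuracy values in both the old and the new table all sit on multiples of 1/23, which suggests a 23-clip evaluation split (an inference from the numbers; the split size is not stated in the card). Under that assumption, the final reported metric 0.7391304347826086 is exactly 17/23, and the previous card's 0.782608695652174 is 18/23. A quick sanity check:

```python
from fractions import Fraction

# Final accuracy from the updated card, and the value it replaced.
new_acc = 0.7391304347826086  # reported after 15 epochs of hubert-base-ls960
old_acc = 0.782608695652174   # reported for the earlier distilhubert run

# Recover the simplest fraction each float represents; both resolve to
# n/23, consistent with a 23-clip evaluation split (an assumption).
print(Fraction(new_acc).limit_denominator(100))  # 17/23
print(Fraction(old_acc).limit_denominator(100))  # 18/23
```

So the headline change from 0.7826 to 0.7391 amounts to a single evaluation clip flipping from correct to incorrect, which is worth keeping in mind when comparing the two base models on a split this small.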
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ddb2a8a6e9b8f7f3e58693ee4a676d0c30f15defb90f78d6652ba16694b2b0e4
+oid sha256:5fe42a9d1e85f31bb8e442114da193f143829b349d258615acc5bcf42b9da162
 size 378309148
runs/May05_13-17-17_e4172518e05d/events.out.tfevents.1746451950.e4172518e05d.869.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b42c3630062af6c4768dbcd0af96e19f6ba4349d2281e910be5ca0df35100cd7
-size 22827
+oid sha256:7eb7ffc3bcd07c0b9462b0afdde9e55af776564152d807d210539c7961694f87
+size 27315
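The `model.safetensors` and TensorBoard event-file entries above are Git LFS pointer files, not the binary payloads: each is three key-value lines (`version`, `oid sha256:<hex>`, `size <bytes>`), and only the hash and size change in the diff. A minimal parser sketch for this pointer layout (the function name is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields.

    A pointer is a short text file whose lines are 'key value' pairs:
    'version <spec-url>', 'oid sha256:<hex>', 'size <bytes>'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is a byte count
    return fields

# The new model.safetensors pointer from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5fe42a9d1e85f31bb8e442114da193f143829b349d258615acc5bcf42b9da162
size 378309148
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 378309148
```

Note that the weights file keeps the same 378309148-byte size before and after the commit, as expected when only the parameter values change; the event file grows (22827 to 27315 bytes) because the longer 15-epoch run logged more steps.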