End of training
README.md
CHANGED

@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model:
+base_model: facebook/hubert-base-ls960
 tags:
 - generated_from_trainer
 datasets:
@@ -9,7 +9,7 @@ datasets:
 metrics:
 - accuracy
 model-index:
-- name:
+- name: hubert-base-ls960-finetuned-gtzan
   results:
   - task:
       name: Audio Classification
@@ -23,18 +23,18 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.7391304347826086
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# hubert-base-ls960-finetuned-gtzan
 
-This model is a fine-tuned version of [
+This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
-- Accuracy: 0.
+- Loss: 1.1234
+- Accuracy: 0.7391
 
 ## Model description
 
@@ -60,28 +60,33 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs:
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-|
-| 1.
-| 1.
-| 1.
-| 1.
-|
-|
-|
-| 0.
-| 0.
+| 1.4277        | 1.0   | 25   | 1.5627          | 0.4783   |
+| 1.4946        | 2.0   | 50   | 1.4727          | 0.5217   |
+| 1.051         | 3.0   | 75   | 1.3207          | 0.6087   |
+| 1.0897        | 4.0   | 100  | 1.3614          | 0.6522   |
+| 1.1461        | 5.0   | 125  | 1.3143          | 0.5652   |
+| 0.6919        | 6.0   | 150  | 1.1131          | 0.6087   |
+| 0.7273        | 7.0   | 175  | 1.4138          | 0.6522   |
+| 0.5955        | 8.0   | 200  | 1.2106          | 0.6957   |
+| 0.4823        | 9.0   | 225  | 1.1681          | 0.6087   |
+| 0.5178        | 10.0  | 250  | 1.1616          | 0.6522   |
+| 0.4635        | 11.0  | 275  | 0.9685          | 0.7826   |
+| 0.4622        | 12.0  | 300  | 0.9625          | 0.7826   |
+| 0.3048        | 13.0  | 325  | 1.0364          | 0.7391   |
+| 0.1576        | 14.0  | 350  | 1.0571          | 0.7391   |
+| 0.1876        | 15.0  | 375  | 1.1234          | 0.7391   |
 
 
 ### Framework versions
 
-- Transformers 4.
-- Pytorch 2.
-- Datasets 3.
-- Tokenizers 0.21.
+- Transformers 4.51.3
+- Pytorch 2.6.0+cu124
+- Datasets 3.5.1
+- Tokenizers 0.21.1
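The headline numbers in the updated card (Loss: 1.1234, Accuracy: 0.7391) come from the final epoch, not the best checkpoint. A minimal sketch checking this against the (epoch, validation loss, accuracy) values copied from the results table above:

```python
# (epoch, validation_loss, accuracy) rows copied from the card's training-results table
rows = [
    (1, 1.5627, 0.4783), (2, 1.4727, 0.5217), (3, 1.3207, 0.6087),
    (4, 1.3614, 0.6522), (5, 1.3143, 0.5652), (6, 1.1131, 0.6087),
    (7, 1.4138, 0.6522), (8, 1.2106, 0.6957), (9, 1.1681, 0.6087),
    (10, 1.1616, 0.6522), (11, 0.9685, 0.7826), (12, 0.9625, 0.7826),
    (13, 1.0364, 0.7391), (14, 1.0571, 0.7391), (15, 1.1234, 0.7391),
]

# The card's headline Loss/Accuracy match the last row exactly.
final_epoch, final_loss, final_acc = rows[-1]

# Peak validation accuracy occurred earlier (max returns the first of the tied rows).
best_epoch, best_loss, best_acc = max(rows, key=lambda r: r[2])

print(final_epoch, final_loss, final_acc)  # 15 1.1234 0.7391
print(best_epoch, best_acc)                # 11 0.7826
```

Note that accuracy peaked at 0.7826 in epochs 11-12 before drifting down to 0.7391; whether the saved weights correspond to the best or the final checkpoint depends on trainer settings (e.g. `load_best_model_at_end`) that are not visible in this diff.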
model.safetensors
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5fe42a9d1e85f31bb8e442114da193f143829b349d258615acc5bcf42b9da162
 size 378309148
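The `model.safetensors` entry above is a Git LFS pointer file, not the weights themselves: three `key value` lines giving the spec version, the content hash, and the byte size. A small sketch parsing that format, using the pointer text from this commit:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: one 'key value' pair per line."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is always an integer byte count
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5fe42a9d1e85f31bb8e442114da193f143829b349d258615acc5bcf42b9da162
size 378309148"""

info = parse_lfs_pointer(pointer)
print(info["oid"])
print(info["size"])  # 378309148 bytes, ~361 MB of weights
```

The diff only changes the `oid` line: the commit swapped in newly trained weights of exactly the same size, as expected when fine-tuning updates every parameter of a fixed architecture.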
runs/May05_13-17-17_e4172518e05d/events.out.tfevents.1746451950.e4172518e05d.869.1
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7eb7ffc3bcd07c0b9462b0afdde9e55af776564152d807d210539c7961694f87
+size 27315
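The scheduler settings in the README diff (`lr_scheduler_type: linear`, `lr_scheduler_warmup_ratio: 0.1`, 15 epochs of 25 steps each, i.e. 375 optimizer steps per the results table) imply a linear warmup followed by linear decay to zero. A sketch of that schedule in plain Python; the peak learning rate is a placeholder assumption, since the card's `learning_rate` line is outside the changed hunks, and the warmup-step rounding mirrors `Trainer`'s `ceil(warmup_ratio * max_steps)`:

```python
import math

TOTAL_STEPS = 15 * 25                         # 375 optimizer steps (step 375 at epoch 15 in the table)
WARMUP_STEPS = math.ceil(0.1 * TOTAL_STEPS)   # warmup_ratio 0.1 -> 38 steps
PEAK_LR = 5e-5                                # placeholder; not shown in this diff

def lr_at(step: int) -> float:
    """Linear warmup from 0 to PEAK_LR, then linear decay to 0 ('linear' schedule)."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

print(lr_at(0), lr_at(WARMUP_STEPS), lr_at(TOTAL_STEPS))  # 0.0, PEAK_LR, 0.0
```

With only 375 total steps, the warmup occupies the first epoch and a half, which is consistent with the noisy validation accuracy in the early rows of the table.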