# slac-aroma
This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 1.0614
- Accuracy: 0.9563
- F1 Macro: 0.9095
- Precision Macro: 0.8951
- Recall Macro: 0.9256
- Total Tf: [1532, 70, 1532, 70]
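The metric implementation is not recorded in this card. Below is a minimal sketch, assuming a standard `compute_metrics` callback built on scikit-learn; the `Total Tf` entry appears to be a custom counter and is not reproduced here.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Macro-averaged metrics, matching the names reported above.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1,
        "precision_macro": precision,
        "recall_macro": recall,
    }
```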
## Model description

More information needed
## Intended uses & limitations

More information needed
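Since the card does not document the task, the following loading sketch is a heavily hedged assumption: it treats the checkpoint as a text classification model (the macro precision/recall/F1 metrics indicate a classification task, though not the modality), and `path/to/slac-aroma` is a placeholder, not a published model id.

```python
from transformers import pipeline

# Hypothetical usage: the task and the model path are both assumptions.
classifier = pipeline("text-classification", model="path/to/slac-aroma")
print(classifier("example input"))
```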
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reconstruction as a `TrainingArguments` object follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 212
- num_epochs: 15
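The list above maps directly onto a `transformers.TrainingArguments` object. The following is a sketch, not the original training script: `output_dir` and the per-epoch evaluation strategy are assumptions (the results table below logs one row per epoch), not values recorded in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="slac-aroma",        # assumed name, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=212,
    num_train_epochs=15,
    eval_strategy="epoch",          # assumed; the table logs one eval per epoch
)
```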
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|---|---|---|---|---|---|---|---|---|
| 0.4534 | 1.0 | 213 | 0.3296 | 0.9426 | 0.8903 | 0.8545 | 0.9413 | [1510, 92, 1510, 92] |
| 0.2968 | 2.0 | 426 | 0.2909 | 0.9320 | 0.8743 | 0.8335 | 0.9391 | [1493, 109, 1493, 109] |
| 0.2515 | 3.0 | 639 | 0.3899 | 0.9513 | 0.9017 | 0.8794 | 0.9286 | [1524, 78, 1524, 78] |
| 0.085 | 4.0 | 852 | 0.5771 | 0.9576 | 0.9121 | 0.8976 | 0.9283 | [1534, 68, 1534, 68] |
| 0.1745 | 5.0 | 1065 | 0.7260 | 0.9588 | 0.9117 | 0.9101 | 0.9133 | [1536, 66, 1536, 66] |
| 0.0626 | 6.0 | 1278 | 1.0108 | 0.9638 | 0.9205 | 0.9293 | 0.9123 | [1544, 58, 1544, 58] |
| 0.0219 | 7.0 | 1491 | 0.7773 | 0.9501 | 0.8992 | 0.8770 | 0.9260 | [1522, 80, 1522, 80] |
| 0.0237 | 8.0 | 1704 | 0.8809 | 0.9526 | 0.9028 | 0.8850 | 0.9235 | [1526, 76, 1526, 76] |
| 0.0182 | 9.0 | 1917 | 0.8934 | 0.9501 | 0.8988 | 0.8778 | 0.9240 | [1522, 80, 1522, 80] |
| 0.0148 | 10.0 | 2130 | 1.0707 | 0.9613 | 0.9174 | 0.9142 | 0.9207 | [1540, 62, 1540, 62] |
| 0.0008 | 11.0 | 2343 | 1.0090 | 0.9538 | 0.9050 | 0.8883 | 0.9242 | [1528, 74, 1528, 74] |
| 0.0005 | 12.0 | 2556 | 1.1176 | 0.9563 | 0.9074 | 0.9013 | 0.9138 | [1532, 70, 1532, 70] |
| 0.0004 | 13.0 | 2769 | 1.0370 | 0.9563 | 0.9092 | 0.8961 | 0.9237 | [1532, 70, 1532, 70] |
| 0.0033 | 14.0 | 2982 | 1.0996 | 0.9569 | 0.9100 | 0.8989 | 0.9221 | [1533, 69, 1533, 69] |
| 0.0003 | 15.0 | 3195 | 1.0614 | 0.9563 | 0.9095 | 0.8951 | 0.9256 | [1532, 70, 1532, 70] |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1