# slac-new-taste-upsample_replacement
This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set (a short sketch of how the macro/micro metrics are computed follows the list):
- Loss: 0.6966
- Accuracy: 0.9121
- F1 Macro: 0.8844
- Precision Macro: 0.8807
- Recall Macro: 0.8884
- F1 Micro: 0.9121
- Precision Micro: 0.9121
- Recall Micro: 0.9121
- Total Tf: [1411, 136, 1411, 136]
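The macro scores are unweighted averages of the per-class metrics, while the micro scores aggregate over all predictions; for single-label classification, micro F1, micro precision, and micro recall all collapse to plain accuracy, which is why those three rows match. A minimal sketch of computing the same metrics with scikit-learn (the label arrays below are placeholders, not the actual evaluation data):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels and predictions; the real evaluation set is not published here.
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 1, 1, 0, 2]

accuracy = accuracy_score(y_true, y_pred)

# Macro: unweighted mean of per-class precision/recall/F1.
prec_macro, rec_macro, f1_macro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)

# Micro: metrics computed over the pooled predictions; for single-label
# classification these all equal accuracy.
prec_micro, rec_micro, f1_micro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="micro", zero_division=0
)

print(accuracy, f1_macro, f1_micro)  # f1_micro == accuracy here
```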
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 277
- num_epochs: 15
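A minimal sketch of how these hyperparameters map onto a Hugging Face `TrainingArguments` object; the output directory and the evaluation/logging strategies are assumptions, everything else mirrors the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="slac-new-taste-upsample_replacement",  # hypothetical output directory
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=277,
    num_train_epochs=15,
    eval_strategy="epoch",        # assumption: the results table reports one evaluation per epoch
    logging_strategy="epoch",
)
```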
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.2606 | 1.0 | 278 | 0.2660 | 0.9082 | 0.8807 | 0.8732 | 0.8892 | 0.9082 | 0.9082 | 0.9082 | [1405, 142, 1405, 142] |
| 0.1679 | 2.0 | 556 | 0.2799 | 0.9134 | 0.8867 | 0.8812 | 0.8926 | 0.9134 | 0.9134 | 0.9134 | [1413, 134, 1413, 134] |
| 0.1035 | 3.0 | 834 | 0.3705 | 0.9095 | 0.8802 | 0.8789 | 0.8815 | 0.9095 | 0.9095 | 0.9095 | [1407, 140, 1407, 140] |
| 0.0735 | 4.0 | 1112 | 0.3818 | 0.9089 | 0.8809 | 0.8752 | 0.8871 | 0.9089 | 0.9089 | 0.9089 | [1406, 141, 1406, 141] |
| 0.0473 | 5.0 | 1390 | 0.3992 | 0.9101 | 0.8823 | 0.8772 | 0.8879 | 0.9101 | 0.9101 | 0.9101 | [1408, 139, 1408, 139] |
| 0.0461 | 6.0 | 1668 | 0.4821 | 0.9108 | 0.8831 | 0.8782 | 0.8883 | 0.9108 | 0.9108 | 0.9108 | [1409, 138, 1409, 138] |
| 0.0197 | 7.0 | 1946 | 0.5172 | 0.9063 | 0.8769 | 0.8729 | 0.8811 | 0.9063 | 0.9063 | 0.9063 | [1402, 145, 1402, 145] |
| 0.0148 | 8.0 | 2224 | 0.5698 | 0.9095 | 0.8806 | 0.8781 | 0.8832 | 0.9095 | 0.9095 | 0.9095 | [1407, 140, 1407, 140] |
| 0.0203 | 9.0 | 2502 | 0.5773 | 0.9095 | 0.8810 | 0.8773 | 0.8849 | 0.9095 | 0.9095 | 0.9095 | [1407, 140, 1407, 140] |
| 0.0037 | 10.0 | 2780 | 0.6239 | 0.9101 | 0.8816 | 0.8787 | 0.8845 | 0.9101 | 0.9101 | 0.9101 | [1408, 139, 1408, 139] |
| 0.0035 | 11.0 | 3058 | 0.6641 | 0.9114 | 0.8806 | 0.8866 | 0.8751 | 0.9114 | 0.9114 | 0.9114 | [1410, 137, 1410, 137] |
| 0.0044 | 12.0 | 3336 | 0.6824 | 0.9089 | 0.8799 | 0.8771 | 0.8828 | 0.9089 | 0.9089 | 0.9089 | [1406, 141, 1406, 141] |
| 0.0035 | 13.0 | 3614 | 0.6953 | 0.9114 | 0.8833 | 0.8804 | 0.8862 | 0.9114 | 0.9114 | 0.9114 | [1410, 137, 1410, 137] |
| 0.0021 | 14.0 | 3892 | 0.7007 | 0.9108 | 0.8829 | 0.8786 | 0.8875 | 0.9108 | 0.9108 | 0.9108 | [1409, 138, 1409, 138] |
| 0.0049 | 15.0 | 4170 | 0.6966 | 0.9121 | 0.8844 | 0.8807 | 0.8884 | 0.9121 | 0.9121 | 0.9121 | [1411, 136, 1411, 136] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
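The accuracy and F1 metrics above suggest a single-label classification checkpoint; a minimal loading sketch under that assumption (the repository id is a placeholder, since the hosting organization is not stated on this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id; replace with the actual Hub path of this checkpoint.
repo_id = "your-org/slac-new-taste-upsample_replacement"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```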