# rlcc-new-aroma-upsample_replacement-absa-None
This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.3614
- Accuracy: 0.6824
- F1 Macro: 0.6658
- Precision Macro: 0.7071
- Recall Macro: 0.6484
- F1 Micro: 0.6824
- Precision Micro: 0.6824
- Recall Micro: 0.6824
- Total Tf: [174, 81, 429, 81]
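Note that the micro-averaged F1, precision, and recall above are all identical to the accuracy. This is expected: in single-label multiclass classification, every false positive for one class is a false negative for another, so the micro averages collapse to plain accuracy. A minimal pure-Python sketch (the label lists in the test are hypothetical, not from this model's evaluation set):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, micro F1, and macro F1 for single-label predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    per_class_f1 = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class_f1.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)

    macro_f1 = sum(per_class_f1) / len(per_class_f1)
    # Single-label case: micro precision == micro recall == accuracy,
    # hence micro F1 == accuracy as well.
    micro_f1 = accuracy
    return accuracy, micro_f1, macro_f1
```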
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- num_epochs: 25
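With the linear scheduler and 40 warmup steps, the learning rate ramps from 0 up to 2e-05 over the first 40 optimizer steps, then decays linearly back to 0 over the remaining steps. A minimal sketch of that schedule, assuming 41 steps per epoch × 25 epochs = 1025 total steps (inferred from the step counts in the results table below):

```python
def linear_schedule_lr(step, base_lr=2e-05, warmup_steps=40, total_steps=1025):
    """Learning rate at a given optimizer step under linear warmup + decay.

    Mirrors the behavior of the 'linear' lr_scheduler_type: linear warmup
    from 0 to base_lr, then linear decay from base_lr to 0.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Linear decay over the post-warmup portion of training.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```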
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.1148 | 1.0 | 41 | 1.0793 | 0.3882 | 0.3708 | 0.3843 | 0.4060 | 0.3882 | 0.3882 | 0.3882 | [99, 156, 354, 156] |
| 0.8951 | 2.0 | 82 | 0.9204 | 0.5451 | 0.5446 | 0.5509 | 0.5735 | 0.5451 | 0.5451 | 0.5451 | [139, 116, 394, 116] |
| 0.6693 | 3.0 | 123 | 0.7705 | 0.6784 | 0.6628 | 0.6642 | 0.6665 | 0.6784 | 0.6784 | 0.6784 | [173, 82, 428, 82] |
| 0.5051 | 4.0 | 164 | 0.8283 | 0.6784 | 0.6503 | 0.6798 | 0.6401 | 0.6784 | 0.6784 | 0.6784 | [173, 82, 428, 82] |
| 0.3722 | 5.0 | 205 | 0.8845 | 0.6784 | 0.6614 | 0.6991 | 0.6442 | 0.6784 | 0.6784 | 0.6784 | [173, 82, 428, 82] |
| 0.2585 | 6.0 | 246 | 0.8958 | 0.7020 | 0.6868 | 0.7045 | 0.6768 | 0.7020 | 0.7020 | 0.7020 | [179, 76, 434, 76] |
| 0.1476 | 7.0 | 287 | 1.0381 | 0.6902 | 0.6744 | 0.7059 | 0.6601 | 0.6902 | 0.6902 | 0.6902 | [176, 79, 431, 79] |
| 0.1345 | 8.0 | 328 | 1.1312 | 0.6745 | 0.6534 | 0.7081 | 0.6325 | 0.6745 | 0.6745 | 0.6745 | [172, 83, 427, 83] |
| 0.0914 | 9.0 | 369 | 1.1266 | 0.6824 | 0.6623 | 0.7043 | 0.6442 | 0.6824 | 0.6824 | 0.6824 | [174, 81, 429, 81] |
| 0.0854 | 10.0 | 410 | 1.2692 | 0.6706 | 0.6452 | 0.7031 | 0.6245 | 0.6706 | 0.6706 | 0.6706 | [171, 84, 426, 84] |
| 0.0462 | 11.0 | 451 | 1.3614 | 0.6824 | 0.6658 | 0.7071 | 0.6484 | 0.6824 | 0.6824 | 0.6824 | [174, 81, 429, 81] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2