rlcc-aroma-upsample_replacement-absa-avg

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the list):

  • Loss: 1.5902
  • Accuracy: 0.7707
  • F1 Macro: 0.7015
  • Precision Macro: 0.7008
  • Recall Macro: 0.7091
  • Total Tf: [316, 94, 1136, 94]
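
The checkpoint can be loaded as a standard Transformers sequence-classification model. The sketch below is a minimal, hedged example: the repo id is taken from the model name above (the owning namespace may need to be prepended), and the two-segment "sentence, aspect" input format is an assumption, since the exact input layout used for this ABSA task is not documented here.

```python
# Minimal inference sketch; the repo id and input format are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "rlcc-aroma-upsample_replacement-absa-avg"  # prepend the hub namespace if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# For ABSA-style classifiers the aspect term is often passed as the second segment.
inputs = tokenizer("The aroma is rich and smooth.", "aroma", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```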

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 51
  • num_epochs: 25
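
For reference, the list above maps onto `TrainingArguments` roughly as in the sketch below. This is a hedged reconstruction, not the original training script: `output_dir` and the per-epoch `eval_strategy` are assumptions (the latter inferred from the per-epoch rows in the results table).

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="rlcc-aroma-upsample_replacement-absa-avg",  # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=51,
    num_train_epochs=25,
    eval_strategy="epoch",  # assumption: matches the per-epoch validation rows below
)
```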

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|
| 1.1003 | 1.0 | 52 | 1.0973 | 0.6073 | 0.4395 | 0.4182 | 0.4880 | [249, 161, 1069, 161] |
| 0.9934 | 2.0 | 104 | 1.1124 | 0.6707 | 0.5499 | 0.6186 | 0.5663 | [275, 135, 1095, 135] |
| 0.8138 | 3.0 | 156 | 1.1006 | 0.7073 | 0.6159 | 0.6228 | 0.6121 | [290, 120, 1110, 120] |
| 0.6792 | 4.0 | 208 | 1.2207 | 0.6927 | 0.5975 | 0.6140 | 0.6475 | [284, 126, 1104, 126] |
| 0.6213 | 5.0 | 260 | 1.2312 | 0.7049 | 0.6159 | 0.6401 | 0.6586 | [289, 121, 1109, 121] |
| 0.5836 | 6.0 | 312 | 1.3172 | 0.7024 | 0.6092 | 0.6290 | 0.6611 | [288, 122, 1108, 122] |
| 0.4774 | 7.0 | 364 | 1.3690 | 0.7146 | 0.6333 | 0.6364 | 0.6577 | [293, 117, 1113, 117] |
| 0.4054 | 8.0 | 416 | 1.3341 | 0.7341 | 0.6562 | 0.6671 | 0.6826 | [301, 109, 1121, 109] |
| 0.348 | 9.0 | 468 | 1.3494 | 0.7610 | 0.6872 | 0.6883 | 0.7002 | [312, 98, 1132, 98] |
| 0.3111 | 10.0 | 520 | 1.3309 | 0.7756 | 0.7080 | 0.7062 | 0.7178 | [318, 92, 1138, 92] |
| 0.2831 | 11.0 | 572 | 1.4596 | 0.7439 | 0.6692 | 0.6790 | 0.6922 | [305, 105, 1125, 105] |
| 0.2606 | 12.0 | 624 | 1.4924 | 0.7610 | 0.6884 | 0.6872 | 0.6978 | [312, 98, 1132, 98] |
| 0.2118 | 13.0 | 676 | 1.5774 | 0.7463 | 0.6739 | 0.6763 | 0.6941 | [306, 104, 1126, 104] |
| 0.1923 | 14.0 | 728 | 1.5830 | 0.7683 | 0.6989 | 0.6984 | 0.7096 | [315, 95, 1135, 95] |
| 0.1543 | 15.0 | 780 | 1.5902 | 0.7707 | 0.7015 | 0.7008 | 0.7091 | [316, 94, 1136, 94] |
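
The accuracy and macro-averaged columns above are standard classification metrics; a `compute_metrics` function along the following lines would reproduce them. This is a hedged sketch using scikit-learn, not necessarily the function used for this run, and the undocumented "Total Tf" column is omitted.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed in by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1,
        "precision_macro": precision,
        "recall_macro": recall,
    }
```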

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0