rlcc-taste-upsample_replacement-absa-None

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9818
  • Accuracy: 0.6268
  • F1 Macro: 0.6744
  • Precision Macro: 0.6735
  • Recall Macro: 0.6758
  • Total Tf: [257, 153, 1077, 153]
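As a hedged sketch of how the macro-averaged metrics above are typically computed (equal weight per class, regardless of class frequency), the snippet below implements macro precision, recall, and F1 from scratch. The labels and predictions are made-up placeholders, not the model's actual evaluation data, and the meaning of "Total Tf" is not documented in the card, so it is not reproduced here.

```python
# Sketch: macro-averaged precision / recall / F1, computed per class and
# then averaged with equal class weight. The inputs below are hypothetical.

def macro_scores(y_true, y_pred):
    """Return (macro precision, macro recall, macro F1)."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

y_true = [0, 0, 1, 1, 2, 2]   # hypothetical gold labels
y_pred = [0, 1, 1, 1, 2, 0]   # hypothetical predictions
p, r, f = macro_scores(y_true, y_pred)
accuracy = sum(t == q for t, q in zip(y_true, y_pred)) / len(y_true)
```

Note that macro F1 can exceed accuracy (as in the evaluation numbers above) when the model does comparatively well on minority classes.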

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 90
  • num_epochs: 25
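The hyperparameters above map directly onto a Hugging Face `TrainingArguments` configuration. This is a hedged sketch of that mapping, not the card author's actual training script: the output directory name is a placeholder, and the base model and dataset are not stated in the card.

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
# output_dir is a placeholder; model and dataset are not specified in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rlcc-taste-upsample_replacement-absa",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=90,
    num_train_epochs=25,
)
```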

Training results

Training Loss  Epoch  Step  Validation Loss  Accuracy  F1 Macro  Precision Macro  Recall Macro  Total Tf
1.1008         1.0    91    1.1034           0.4098    0.4187    0.6948           0.5326        [168, 242, 988, 242]
0.8457         2.0    182   0.9629           0.6024    0.6511    0.6508           0.6525        [247, 163, 1067, 163]
0.5988         3.0    273   1.0210           0.6220    0.6676    0.6704           0.6671        [255, 155, 1075, 155]
0.3711         4.0    364   1.2443           0.6268    0.6738    0.6732           0.6770        [257, 153, 1077, 153]
0.2455         5.0    455   1.3622           0.6439    0.6881    0.6873           0.6911        [264, 146, 1084, 146]
0.2254         6.0    546   1.4938           0.6195    0.6667    0.6656           0.6692        [254, 156, 1074, 156]
0.1683         7.0    637   1.7064           0.6317    0.6770    0.6759           0.6864        [259, 151, 1079, 151]
0.1288         8.0    728   1.7828           0.6122    0.6593    0.6616           0.6739        [251, 159, 1071, 159]
0.1041         9.0    819   1.8101           0.6366    0.6826    0.6906           0.6793        [261, 149, 1081, 149]
0.0638         10.0   910   1.9818           0.6268    0.6744    0.6735           0.6758        [257, 153, 1077, 153]
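Training loss keeps falling while validation loss rises after epoch 2 and validation accuracy peaks at epoch 5, a common overfitting pattern; the evaluation numbers reported at the top of this card correspond to the final epoch, not the best one. As a hedged illustration, this snippet re-reads the table above to find the epoch with the highest validation accuracy:

```python
# Sketch: pick the best epoch by validation accuracy from the table above.
results = [
    # (epoch, val_loss, accuracy)
    (1, 1.1034, 0.4098),
    (2, 0.9629, 0.6024),
    (3, 1.0210, 0.6220),
    (4, 1.2443, 0.6268),
    (5, 1.3622, 0.6439),
    (6, 1.4938, 0.6195),
    (7, 1.7064, 0.6317),
    (8, 1.7828, 0.6122),
    (9, 1.8101, 0.6366),
    (10, 1.9818, 0.6268),
]
best_epoch, best_loss, best_acc = max(results, key=lambda r: r[2])
```

With `load_best_model_at_end=True` and an appropriate `metric_for_best_model`, the Transformers `Trainer` can restore such a checkpoint automatically; whether that was done here is not stated in the card.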

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
Model size

  • 0.1B params (Safetensors, F32 tensors)