# rlcc-appearance-upsample_replacement-absa-min

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5289
- Accuracy: 0.6829
- F1 Macro: 0.6528
- Precision Macro: 0.6573
- Recall Macro: 0.6497
- Total Tf: [280, 130, 1100, 130]
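The macro-averaged metrics reported above can be reproduced with scikit-learn's standard scorers. The label and prediction arrays below are hypothetical placeholders for illustration; the card does not publish the evaluation labels.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical labels/predictions for a 3-class task (not the card's data).
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

accuracy = accuracy_score(y_true, y_pred)
# average="macro" gives the unweighted mean over classes, matching the
# "F1 Macro" / "Precision Macro" / "Recall Macro" rows above.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Accuracy: {accuracy:.4f}  F1 Macro: {f1:.4f}  "
      f"Precision Macro: {precision:.4f}  Recall Macro: {recall:.4f}")
```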
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 65
- num_epochs: 25
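The hyperparameters above can be expressed as a `transformers.TrainingArguments` configuration. This is a sketch under the assumption that training used the standard `Trainer` API; the base model and dataset are not named in the card, so only the reported settings are reproduced.

```python
from transformers import TrainingArguments

# Sketch of the reported configuration; output_dir is taken from the
# model name, everything else from the hyperparameter list above.
args = TrainingArguments(
    output_dir="rlcc-appearance-upsample_replacement-absa-min",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=65,
    num_train_epochs=25,
)
```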
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|---|---|---|---|---|---|---|---|---|
| 1.104 | 1.0 | 66 | 1.0942 | 0.5707 | 0.4639 | 0.4375 | 0.5035 | [234, 176, 1054, 176] |
| 0.9421 | 2.0 | 132 | 0.9844 | 0.6122 | 0.5511 | 0.5757 | 0.6227 | [251, 159, 1071, 159] |
| 0.7491 | 3.0 | 198 | 1.0425 | 0.6268 | 0.5706 | 0.6097 | 0.6372 | [257, 153, 1077, 153] |
| 0.6547 | 4.0 | 264 | 1.1165 | 0.6317 | 0.5931 | 0.6007 | 0.6266 | [259, 151, 1079, 151] |
| 0.5895 | 5.0 | 330 | 1.2059 | 0.6341 | 0.5947 | 0.6001 | 0.6282 | [260, 150, 1080, 150] |
| 0.5299 | 6.0 | 396 | 1.2221 | 0.6488 | 0.6078 | 0.6120 | 0.6197 | [266, 144, 1086, 144] |
| 0.4922 | 7.0 | 462 | 1.1999 | 0.6512 | 0.6145 | 0.6147 | 0.6191 | [267, 143, 1087, 143] |
| 0.413 | 8.0 | 528 | 1.3816 | 0.6439 | 0.6047 | 0.6067 | 0.6174 | [264, 146, 1084, 146] |
| 0.4016 | 9.0 | 594 | 1.3556 | 0.6439 | 0.6112 | 0.6085 | 0.6169 | [264, 146, 1084, 146] |
| 0.3321 | 10.0 | 660 | 1.3395 | 0.6561 | 0.6233 | 0.6204 | 0.6316 | [269, 141, 1089, 141] |
| 0.3126 | 11.0 | 726 | 1.4235 | 0.6683 | 0.6368 | 0.6363 | 0.6445 | [274, 136, 1094, 136] |
| 0.2674 | 12.0 | 792 | 1.4367 | 0.6707 | 0.6372 | 0.6358 | 0.6443 | [275, 135, 1095, 135] |
| 0.257 | 13.0 | 858 | 1.3366 | 0.6902 | 0.6595 | 0.6616 | 0.6585 | [283, 127, 1103, 127] |
| 0.2149 | 14.0 | 924 | 1.4133 | 0.6683 | 0.6346 | 0.6391 | 0.6370 | [274, 136, 1094, 136] |
| 0.1996 | 15.0 | 990 | 1.3019 | 0.6927 | 0.6610 | 0.6746 | 0.6538 | [284, 126, 1104, 126] |
| 0.1883 | 16.0 | 1056 | 1.4445 | 0.6585 | 0.6254 | 0.6285 | 0.6261 | [270, 140, 1090, 140] |
| 0.1642 | 17.0 | 1122 | 1.4636 | 0.6707 | 0.6400 | 0.6434 | 0.6403 | [275, 135, 1095, 135] |
| 0.166 | 18.0 | 1188 | 1.4318 | 0.6927 | 0.6622 | 0.6661 | 0.6592 | [284, 126, 1104, 126] |
| 0.1452 | 19.0 | 1254 | 1.4605 | 0.6927 | 0.6626 | 0.6662 | 0.6605 | [284, 126, 1104, 126] |
| 0.1569 | 20.0 | 1320 | 1.4895 | 0.6805 | 0.6495 | 0.6543 | 0.6463 | [279, 131, 1099, 131] |
| 0.1372 | 21.0 | 1386 | 1.5280 | 0.6780 | 0.6488 | 0.6504 | 0.6478 | [278, 132, 1098, 132] |
| 0.1436 | 22.0 | 1452 | 1.5309 | 0.6878 | 0.6584 | 0.6608 | 0.6569 | [282, 128, 1102, 128] |
| 0.1251 | 23.0 | 1518 | 1.5289 | 0.6829 | 0.6528 | 0.6573 | 0.6497 | [280, 130, 1100, 130] |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0+cu118
- Tokenizers 0.21.0