# rlcc-new-palate-upsample_replacement-absa-max

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5839
- Accuracy: 0.5449
- F1 Macro: 0.5505
- Precision Macro: 0.5946
- Recall Macro: 0.5420
- F1 Micro: 0.5449
- Precision Micro: 0.5449
- Recall Micro: 0.5449
- Total Tf: [97, 81, 275, 81]
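
The metric suite (accuracy together with macro- and micro-averaged precision, recall, and F1) points to a single-label multiclass classification head, and the "absa" in the model name suggests an aspect-based sentiment analysis task, though the card does not confirm this. A minimal inference sketch, assuming the checkpoint is published under the repository id below and uses a standard sequence-classification head:

```python
from transformers import pipeline

# Assumed repository id, taken from the model name above; substitute the
# actual hub path or local checkpoint directory.
MODEL_ID = "rlcc-new-palate-upsample_replacement-absa-max"

# "text-classification" assumes a standard sequence-classification head;
# the label set is not documented in this card.
classifier = pipeline("text-classification", model=MODEL_ID)

print(classifier("The battery life is great, but the screen is dim."))
```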
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 21
- num_epochs: 25
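
A minimal sketch of how the values above map onto `transformers.TrainingArguments`; the output directory is a placeholder, the listed betas and epsilon are the `adamw_torch` defaults, and every unlisted option is left at its default:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rlcc-new-palate-upsample_replacement-absa-max",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=21,
    num_train_epochs=25,
)
```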
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.1769 | 1.0 | 22 | 1.1092 | 0.3371 | 0.1886 | 0.1971 | 0.3291 | 0.3371 | 0.3371 | 0.3371 | [60, 118, 238, 118] |
| 1.0296 | 2.0 | 44 | 1.1047 | 0.3539 | 0.2005 | 0.3156 | 0.3390 | 0.3539 | 0.3539 | 0.3539 | [63, 115, 241, 115] |
| 0.8965 | 3.0 | 66 | 1.0251 | 0.4494 | 0.4440 | 0.4374 | 0.4562 | 0.4494 | 0.4494 | 0.4494 | [80, 98, 258, 98] |
| 0.7047 | 4.0 | 88 | 1.0048 | 0.5281 | 0.5123 | 0.5123 | 0.5331 | 0.5281 | 0.5281 | 0.5281 | [94, 84, 272, 84] |
| 0.5384 | 5.0 | 110 | 1.0360 | 0.5337 | 0.5366 | 0.5379 | 0.5369 | 0.5337 | 0.5337 | 0.5337 | [95, 83, 273, 83] |
| 0.4743 | 6.0 | 132 | 1.1596 | 0.5337 | 0.5378 | 0.5474 | 0.5387 | 0.5337 | 0.5337 | 0.5337 | [95, 83, 273, 83] |
| 0.3305 | 7.0 | 154 | 1.2948 | 0.5000 | 0.5026 | 0.5492 | 0.4972 | 0.5000 | 0.5000 | 0.5000 | [89, 89, 267, 89] |
| 0.2480 | 8.0 | 176 | 1.2884 | 0.5618 | 0.5644 | 0.6166 | 0.5576 | 0.5618 | 0.5618 | 0.5618 | [100, 78, 278, 78] |
| 0.2021 | 9.0 | 198 | 1.4366 | 0.5449 | 0.5492 | 0.5942 | 0.5419 | 0.5449 | 0.5449 | 0.5449 | [97, 81, 275, 81] |
| 0.1374 | 10.0 | 220 | 1.4121 | 0.5506 | 0.5538 | 0.6038 | 0.5467 | 0.5506 | 0.5506 | 0.5506 | [98, 80, 276, 80] |
| 0.1042 | 11.0 | 242 | 1.4868 | 0.5337 | 0.5374 | 0.5801 | 0.5329 | 0.5337 | 0.5337 | 0.5337 | [95, 83, 273, 83] |
| 0.0927 | 12.0 | 264 | 1.5962 | 0.5281 | 0.5321 | 0.5937 | 0.5249 | 0.5281 | 0.5281 | 0.5281 | [94, 84, 272, 84] |
| 0.1098 | 13.0 | 286 | 1.5839 | 0.5449 | 0.5505 | 0.5946 | 0.5420 | 0.5449 | 0.5449 | 0.5449 | [97, 81, 275, 81] |
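
The evaluation results reported at the top of this card match the epoch-13 row (validation loss 1.5839), i.e. the final logged checkpoint; training stopped well short of the configured 25 epochs, though the card does not say why. Note also that the micro-averaged precision, recall, and F1 equal accuracy in every row: in single-label multiclass evaluation, each false positive for one class is a false negative for another, so the micro averages collapse to accuracy. A small illustration with scikit-learn (an assumed dependency, not named in this card):

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy single-label, 3-class predictions (illustrative only).
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 2, 1, 2]

print(accuracy_score(y_true, y_pred))             # 0.667
print(f1_score(y_true, y_pred, average="micro"))  # 0.667 -- equals accuracy
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
```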
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2