# causal_classifier_base_2025b
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6842
- Accuracy: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
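The card gives no usage details. As a minimal sketch, a fine-tuned sequence classifier like this one produces one logit per class, which is turned into a prediction by a softmax followed by an argmax. The logit values and the label names below are illustrative assumptions, not taken from the model:

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one input sentence (not real model output).
logits = [-1.2, 2.3]
probs = softmax(logits)

# Assumed label set for a binary causal classifier.
labels = ["not_causal", "causal"]
prediction = labels[probs.index(max(probs))]
print(prediction)  # → causal
```

In practice the logits would come from the model's forward pass (e.g. via 🤗 Transformers `AutoModelForSequenceClassification`); this snippet shows only the post-processing step.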
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 84
- eval_batch_size: 84
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
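Assuming the standard 🤗 Transformers `Trainer` was used (the card does not say), the hyperparameters above would correspond roughly to the following `TrainingArguments`; the `output_dir` and the evaluation strategy are assumptions, not from the card:

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the listed hyperparameters as TrainingArguments.
training_args = TrainingArguments(
    output_dir="causal_classifier_base_2025b",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=84,
    per_device_eval_batch_size=84,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table logs one eval per epoch
)
```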
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.6451 | 1.0 | 677 | 0.3905 | 0.8486 |
| 0.4548 | 2.0 | 1354 | 0.3248 | 0.8786 |
| 0.333 | 3.0 | 2031 | 0.3132 | 0.8862 |
| 0.2618 | 4.0 | 2708 | 0.2855 | 0.9133 |
| 0.2271 | 5.0 | 3385 | 0.3195 | 0.8981 |
| 0.169 | 6.0 | 4062 | 0.3678 | 0.8926 |
| 0.1417 | 7.0 | 4739 | 0.3716 | 0.9044 |
| 0.1325 | 8.0 | 5416 | 0.3965 | 0.9031 |
| 0.1056 | 9.0 | 6093 | 0.4048 | 0.9116 |
| 0.0914 | 10.0 | 6770 | 0.4088 | 0.9095 |
| 0.085 | 11.0 | 7447 | 0.4272 | 0.9192 |
| 0.0709 | 12.0 | 8124 | 0.4835 | 0.9074 |
| 0.0657 | 13.0 | 8801 | 0.4501 | 0.9129 |
| 0.0633 | 14.0 | 9478 | 0.4913 | 0.9082 |
| 0.0518 | 15.0 | 10155 | 0.4659 | 0.9188 |
| 0.05 | 16.0 | 10832 | 0.5005 | 0.9095 |
| 0.0438 | 17.0 | 11509 | 0.5048 | 0.9146 |
| 0.0391 | 18.0 | 12186 | 0.5279 | 0.9133 |
| 0.0363 | 19.0 | 12863 | 0.5297 | 0.9078 |
| 0.0366 | 20.0 | 13540 | 0.5633 | 0.9069 |
| 0.0308 | 21.0 | 14217 | 0.5911 | 0.9124 |
| 0.0294 | 22.0 | 14894 | 0.5519 | 0.9167 |
| 0.0282 | 23.0 | 15571 | 0.6248 | 0.9133 |
| 0.0223 | 24.0 | 16248 | 0.5584 | 0.9150 |
| 0.0241 | 25.0 | 16925 | 0.6267 | 0.9095 |
| 0.0213 | 26.0 | 17602 | 0.6172 | 0.9129 |
| 0.0197 | 27.0 | 18279 | 0.6328 | 0.9133 |
| 0.0186 | 28.0 | 18956 | 0.6634 | 0.9103 |
| 0.0158 | 29.0 | 19633 | 0.6469 | 0.9171 |
| 0.0155 | 30.0 | 20310 | 0.6782 | 0.9150 |
| 0.0131 | 31.0 | 20987 | 0.6496 | 0.9192 |
| 0.0119 | 32.0 | 21664 | 0.6960 | 0.9158 |
| 0.0102 | 33.0 | 22341 | 0.6467 | 0.9179 |
| 0.0107 | 34.0 | 23018 | 0.6842 | 0.9226 |
| 0.0119 | 35.0 | 23695 | 0.6582 | 0.9222 |
| 0.011 | 36.0 | 24372 | 0.6287 | 0.9188 |
| 0.0085 | 37.0 | 25049 | 0.6915 | 0.9192 |
| 0.0074 | 38.0 | 25726 | 0.7071 | 0.9179 |
| 0.0075 | 39.0 | 26403 | 0.6916 | 0.9192 |
| 0.0069 | 40.0 | 27080 | 0.6898 | 0.9141 |
| 0.0065 | 41.0 | 27757 | 0.7014 | 0.9184 |
| 0.0069 | 42.0 | 28434 | 0.7259 | 0.9171 |
| 0.0045 | 43.0 | 29111 | 0.7370 | 0.9192 |
| 0.0041 | 44.0 | 29788 | 0.7312 | 0.9192 |
| 0.0055 | 45.0 | 30465 | 0.7397 | 0.9196 |
| 0.0034 | 46.0 | 31142 | 0.7590 | 0.9201 |
| 0.0033 | 47.0 | 31819 | 0.7592 | 0.9184 |
| 0.003 | 48.0 | 32496 | 0.7662 | 0.9201 |
| 0.0033 | 49.0 | 33173 | 0.7650 | 0.9209 |
| 0.0033 | 50.0 | 33850 | 0.7634 | 0.9217 |
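The headline metrics at the top of the card (loss 0.6842, accuracy 0.9226) match epoch 34, i.e. the checkpoint with the best validation accuracy rather than the final one. That selection can be sketched over a few rows of the table above:

```python
# Each tuple: (epoch, validation_loss, accuracy) — a subset of the results table.
results = [
    (4, 0.2855, 0.9133),   # lowest validation loss
    (34, 0.6842, 0.9226),  # highest accuracy
    (35, 0.6582, 0.9222),
    (50, 0.7634, 0.9217),  # final epoch
]

# Pick the checkpoint with the highest validation accuracy.
best = max(results, key=lambda row: row[2])
print(best)  # → (34, 0.6842, 0.9226), the reported evaluation result
```

Note that validation loss bottoms out around epoch 4 and climbs steadily afterward while accuracy plateaus, a typical overfitting pattern over 50 epochs; selecting by accuracy rather than loss is why the reported loss is comparatively high.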
### Framework versions
- Transformers 4.28.0
- PyTorch 2.4.0+cu124
- Datasets 2.21.0
- Tokenizers 0.13.3