# finbert-ft-icar-a-mda-direct
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the generator dataset. It achieves the following results on the evaluation set (a hedged inference sketch follows the metrics):
- Loss: 2.6969
- Accuracy: 0.5238
- Precision: 0.5217
- Recall: 0.5057
- F1: 0.4885
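
As a quick usage reference, the sketch below loads the checkpoint for sequence classification with the `transformers` pipeline. The Hub id, the example sentence, and the meaning of the returned labels are assumptions; this card does not document the label set.

```python
# Minimal inference sketch (not the authors' evaluation script).
# Assumes the checkpoint is published under the Hub id below and that the
# label names stored in the model config are the ones you expect.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abdiharyadi/finbert-ft-icar-a-mda-direct",
)

example = "Revenue increased compared to the prior fiscal year."  # placeholder MD&A-style sentence
print(classifier(example))  # e.g. [{'label': '...', 'score': 0.87}]
```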
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 3
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
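
For readers who want to reproduce a comparable setup, the hyperparameters above map roughly to the `TrainingArguments` sketch below. The output directory and the evaluation/logging cadence are assumptions, not the exact script used to train this checkpoint.

```python
# Hedged sketch: the hyperparameters listed above expressed as Trainer arguments.
# Output path and evaluation/logging strategies are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finbert-ft-icar-a-mda-direct",   # placeholder output path
    learning_rate=3e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=3,               # effective train batch size: 1 * 3 = 3
    num_train_epochs=50,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",                       # per-epoch evaluation matches the results table
    logging_strategy="epoch",
)
```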
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| 1.4149 | 1.0 | 83 | 1.3897 | 0.3492 | 0.3407 | 0.3465 | 0.2913 |
| 1.0795 | 2.0 | 166 | 1.2172 | 0.3810 | 0.3806 | 0.3808 | 0.2989 |
| 0.9891 | 3.0 | 249 | 1.2202 | 0.3651 | 0.3639 | 0.3624 | 0.3012 |
| 0.9453 | 4.0 | 332 | 1.3485 | 0.3492 | 0.3462 | 0.3516 | 0.2667 |
| 0.8793 | 5.0 | 415 | 1.3342 | 0.3968 | 0.3583 | 0.3777 | 0.3338 |
| 0.8093 | 6.0 | 498 | 1.3721 | 0.3968 | 0.3802 | 0.3801 | 0.3722 |
| 0.7 | 7.0 | 581 | 1.5765 | 0.4444 | 0.3407 | 0.4216 | 0.3419 |
| 0.5749 | 8.0 | 664 | 1.5505 | 0.4444 | 0.4142 | 0.4252 | 0.4114 |
| 0.5182 | 9.0 | 747 | 1.6208 | 0.4444 | 0.4011 | 0.4189 | 0.3966 |
| 0.4217 | 10.0 | 830 | 1.6965 | 0.4603 | 0.4370 | 0.4422 | 0.4360 |
| 0.4098 | 11.0 | 913 | 1.7849 | 0.4603 | 0.4381 | 0.4473 | 0.4350 |
| 0.3067 | 12.0 | 996 | 1.9142 | 0.4444 | 0.4346 | 0.4365 | 0.4166 |
| 0.261 | 13.0 | 1079 | 2.0077 | 0.4286 | 0.4247 | 0.4207 | 0.4049 |
| 0.1653 | 14.0 | 1162 | 2.0661 | 0.4444 | 0.4452 | 0.4365 | 0.4177 |
| 0.1148 | 15.0 | 1245 | 2.1456 | 0.4921 | 0.4748 | 0.4740 | 0.4623 |
| 0.1325 | 16.0 | 1328 | 2.2233 | 0.4603 | 0.4483 | 0.4473 | 0.4345 |
| 0.0883 | 17.0 | 1411 | 2.2824 | 0.4762 | 0.4614 | 0.4581 | 0.4490 |
| 0.0906 | 18.0 | 1494 | 2.4244 | 0.4603 | 0.4426 | 0.4448 | 0.4354 |
| 0.0544 | 19.0 | 1577 | 2.4539 | 0.4603 | 0.4434 | 0.4473 | 0.4343 |
| 0.0432 | 20.0 | 1660 | 2.5901 | 0.4286 | 0.4310 | 0.4207 | 0.4048 |
| 0.0542 | 21.0 | 1743 | 2.5440 | 0.4603 | 0.4434 | 0.4473 | 0.4343 |
| 0.0179 | 22.0 | 1826 | 2.6234 | 0.4603 | 0.4483 | 0.4499 | 0.4324 |
| 0.0129 | 23.0 | 1909 | 2.7417 | 0.5079 | 0.5085 | 0.4899 | 0.4758 |
| 0.0284 | 24.0 | 1992 | 2.6969 | 0.5238 | 0.5217 | 0.5057 | 0.4885 |
| 0.0316 | 25.0 | 2075 | 2.8483 | 0.4921 | 0.4862 | 0.4765 | 0.4615 |
| 0.0089 | 26.0 | 2158 | 2.8401 | 0.4921 | 0.4748 | 0.4740 | 0.4623 |
| 0.0205 | 27.0 | 2241 | 2.9610 | 0.4762 | 0.4785 | 0.4632 | 0.4473 |
| 0.0045 | 28.0 | 2324 | 2.9589 | 0.4921 | 0.4843 | 0.4740 | 0.4624 |
| 0.0139 | 29.0 | 2407 | 2.9910 | 0.4762 | 0.4642 | 0.4632 | 0.4469 |
| 0.0113 | 30.0 | 2490 | 3.0800 | 0.4603 | 0.4515 | 0.4499 | 0.4321 |
| 0.0036 | 31.0 | 2573 | 3.1598 | 0.4762 | 0.4857 | 0.4657 | 0.4453 |
| 0.0044 | 32.0 | 2656 | 3.0963 | 0.4603 | 0.4482 | 0.4499 | 0.4319 |
| 0.0117 | 33.0 | 2739 | 3.1607 | 0.4444 | 0.4429 | 0.4365 | 0.4169 |
| 0.0084 | 34.0 | 2822 | 3.1653 | 0.4762 | 0.4628 | 0.4607 | 0.4484 |
| 0.0028 | 35.0 | 2905 | 3.2284 | 0.4286 | 0.4285 | 0.4207 | 0.4048 |
| 0.0025 | 36.0 | 2988 | 3.2340 | 0.4762 | 0.4726 | 0.4607 | 0.4487 |
| 0.0191 | 37.0 | 3071 | 3.2446 | 0.4444 | 0.4337 | 0.4340 | 0.4195 |
| 0.0082 | 38.0 | 3154 | 3.2743 | 0.4444 | 0.4414 | 0.4340 | 0.4198 |
| 0.0111 | 39.0 | 3237 | 3.2742 | 0.4444 | 0.4337 | 0.4340 | 0.4195 |
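
The card does not state how the precision, recall, and F1 columns are averaged. The sketch below shows one common way such columns are produced with a `Trainer` `compute_metrics` callback; the macro averaging mode is an assumption.

```python
# Hedged sketch of a compute_metrics callback that yields the four metric columns above.
# Macro averaging is an assumption; the card does not state the averaging mode used.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```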
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 4.4.1
- Tokenizers 0.21.2