indobert-post-training-fin-sa-3

This model is a fine-tuned version of elidle/indobert-fin_news-mlm-3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2431
  • Accuracy: 0.9615
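For inference, a minimal sketch with the `transformers` text-classification pipeline might look like the following. The repo id `elidle/indobert-post-training-fin-sa-3` is assumed from this card's title, and the example headline is purely illustrative; the actual label names come from the model's own `id2label` config:

```python
from transformers import pipeline

# Minimal inference sketch. Assumes the checkpoint is published on the Hub
# under the repo id "elidle/indobert-post-training-fin-sa-3" (the name on
# this card); the pipeline downloads model and tokenizer from the Hub.
classifier = pipeline(
    "text-classification",
    model="elidle/indobert-post-training-fin-sa-3",
)

# Illustrative Indonesian financial-news headline
# ("Company net profit rose 20% in the third quarter.").
result = classifier("Laba bersih perusahaan naik 20% pada kuartal ketiga.")
print(result)  # a list of {'label': ..., 'score': ...} dicts
```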

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 8
  • mixed_precision_training: Native AMP

Training results

Training Loss   Epoch    Step   Validation Loss   Accuracy
0.9874          0.1961     10   0.6666            0.7582
0.5474          0.3922     20   0.4689            0.7802
0.4264          0.5882     30   0.2823            0.9286
0.2774          0.7843     40   0.2123            0.9286
0.1896          0.9804     50   0.2001            0.9341
0.1534          1.1765     60   0.1659            0.9396
0.1181          1.3725     70   0.1622            0.9396
0.0913          1.5686     80   0.1629            0.9505
0.1362          1.7647     90   0.1882            0.9505
0.1469          1.9608    100   0.1642            0.9505
0.0434          2.1569    110   0.1462            0.9615
0.0287          2.3529    120   0.1798            0.9451
0.0620          2.5490    130   0.1734            0.9505
0.0610          2.7451    140   0.2043            0.9560
0.1002          2.9412    150   0.1924            0.9670
0.0138          3.1373    160   0.2432            0.9560
0.0563          3.3333    170   0.2589            0.9451
0.0070          3.5294    180   0.2466            0.9560
0.0241          3.7255    190   0.2431            0.9615
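The table above also lets us estimate the size of the (otherwise unknown) training set: 10 optimizer steps correspond to 0.1961 epochs, so one epoch is about 51 steps, and with a train batch size of 32 that puts the training set at roughly 1,600 examples. A quick back-of-the-envelope check (an upper bound, since the last batch of an epoch may be partial):

```python
# Estimate steps per epoch and training-set size from the log above.
steps = 10             # first logged step count
epoch_fraction = 0.1961  # epoch value at that step
batch_size = 32        # train_batch_size from the hyperparameters

steps_per_epoch = round(steps / epoch_fraction)
approx_train_examples = steps_per_epoch * batch_size

print(steps_per_epoch)        # 51
print(approx_train_examples)  # 1632
```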

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1