# indobert-base-p1-multilabel-indonesian-hate-speech-new

This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3335
- F1: 0.7926
- ROC AUC: 0.8709
- Accuracy: 0.7253
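For context on how these multilabel metrics are typically computed in this kind of setup (a sketch, not the card's actual evaluation code): per-label sigmoid probabilities are thresholded at 0.5, F1 and ROC AUC are micro-averaged, and accuracy is exact-match (subset) accuracy. The label set and the scores below are purely illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Illustrative sigmoid probabilities for 4 examples x 3 labels
# (the card does not list the actual label names or counts).
probs = np.array([
    [0.91, 0.12, 0.80],
    [0.30, 0.75, 0.05],
    [0.66, 0.42, 0.49],
    [0.10, 0.20, 0.95],
])
labels = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
])

# Threshold sigmoid outputs at 0.5 to get binary predictions per label.
preds = (probs >= 0.5).astype(int)

f1 = f1_score(labels, preds, average="micro")
roc_auc = roc_auc_score(labels, probs, average="micro")
subset_acc = accuracy_score(labels, preds)  # exact match across all labels
```

Subset accuracy is the strictest of the three: a row counts as correct only if every label matches, which is why the Accuracy column sits below F1 and ROC AUC in the results table.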
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (PyTorch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
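The list above maps onto a Trainer configuration roughly as follows. This is a hypothetical reconstruction: `output_dir` and any settings not listed in the card are placeholders, not the values actually used.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run's configuration from the
# hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="indobert-base-p1-multilabel-indonesian-hate-speech-new",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # AdamW, PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```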
### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | ROC AUC | Accuracy |
|---|---|---|---|---|---|---|
| 0.2887 | 1.0 | 659 | 0.2198 | 0.7353 | 0.8261 | 0.6110 |
| 0.1999 | 2.0 | 1318 | 0.2044 | 0.7658 | 0.8579 | 0.6279 |
| 0.1523 | 3.0 | 1977 | 0.2244 | 0.7615 | 0.8427 | 0.6699 |
| 0.0755 | 4.0 | 2636 | 0.2394 | 0.7775 | 0.8535 | 0.7027 |
| 0.0537 | 5.0 | 3295 | 0.2601 | 0.7810 | 0.8641 | 0.7075 |
| 0.0411 | 6.0 | 3954 | 0.2834 | 0.7774 | 0.8609 | 0.6993 |
| 0.0219 | 7.0 | 4613 | 0.2974 | 0.7762 | 0.8478 | 0.7172 |
| 0.0142 | 8.0 | 5272 | 0.2976 | 0.7823 | 0.8684 | 0.7069 |
| 0.0124 | 9.0 | 5931 | 0.3070 | 0.7883 | 0.8656 | 0.7211 |
| 0.0081 | 10.0 | 6590 | 0.3132 | 0.7862 | 0.8671 | 0.7147 |
| 0.0073 | 11.0 | 7249 | 0.3271 | 0.7886 | 0.8632 | 0.7289 |
| 0.0052 | 12.0 | 7908 | 0.3254 | 0.7866 | 0.8645 | 0.7246 |
| 0.0044 | 13.0 | 8567 | 0.3316 | 0.7906 | 0.8656 | 0.7308 |
| 0.0040 | 14.0 | 9226 | 0.3316 | 0.7921 | 0.8715 | 0.7221 |
| 0.0039 | 15.0 | 9885 | 0.3335 | 0.7926 | 0.8709 | 0.7253 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
## Model tree for PaceKW/indobert-base-p1-multilabel-indonesian-hate-speech-new

Base model: [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1)