---
library_name: transformers
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: hate_speech
    results: []
---

# hate_speech

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.7687
- Accuracy: 0.8017
- AUC score: 0.8728
- F1: 0.8298
- Precision: 0.8074
- Recall: 0.8534
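For reference, the accuracy, precision, recall, and F1 figures above follow the standard binary-classification definitions. A minimal sketch in plain Python (a real Trainer setup would typically compute these via `sklearn.metrics` in a `compute_metrics` callback; the example labels below are illustrative, not from this model):

```python
# Standard binary-classification metrics, computed from 0/1 labels and
# predictions. Pure Python for clarity; the sample inputs are illustrative.

def binary_metrics(labels, preds):
    """Return accuracy, precision, recall and F1 for 0/1 labels."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    accuracy = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative example: 2 true positives, 1 false positive, 1 false negative.
m = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```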

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | AUC Score | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:---------:|:------:|
| 0.6445        | 0.0923 | 100  | 0.5441          | 0.7435   | 0.8052    | 0.7751 | 0.7701    | 0.7801 |
| 0.5767        | 0.1845 | 200  | 0.5260          | 0.7555   | 0.8345    | 0.7721 | 0.8179    | 0.7313 |
| 0.5126        | 0.2768 | 300  | 0.5090          | 0.7629   | 0.8450    | 0.8068 | 0.7493    | 0.8738 |
| 0.4723        | 0.3690 | 400  | 0.5557          | 0.7417   | 0.8505    | 0.7363 | 0.8728    | 0.6368 |
| 0.511         | 0.4613 | 500  | 0.4766          | 0.7823   | 0.8525    | 0.8106 | 0.7991    | 0.8225 |
| 0.5082        | 0.5535 | 600  | 0.4947          | 0.7915   | 0.8565    | 0.8239 | 0.7900    | 0.8607 |
| 0.4494        | 0.6458 | 700  | 0.4976          | 0.7763   | 0.8560    | 0.8032 | 0.8003    | 0.8062 |
| 0.4816        | 0.7380 | 800  | 0.4648          | 0.7827   | 0.8624    | 0.8111 | 0.7992    | 0.8233 |
| 0.4665        | 0.8303 | 900  | 0.4649          | 0.7887   | 0.8656    | 0.8200 | 0.7926    | 0.8493 |
| 0.5226        | 0.9225 | 1000 | 0.4537          | 0.7929   | 0.8666    | 0.8158 | 0.8222    | 0.8094 |
| 0.4643        | 1.0148 | 1100 | 0.4747          | 0.7998   | 0.8676    | 0.8287 | 0.8040    | 0.8550 |
| 0.3617        | 1.1070 | 1200 | 0.5402          | 0.7943   | 0.8668    | 0.8213 | 0.8084    | 0.8347 |
| 0.3439        | 1.1993 | 1300 | 0.5924          | 0.7966   | 0.8703    | 0.8267 | 0.7988    | 0.8567 |
| 0.3482        | 1.2915 | 1400 | 0.5369          | 0.8003   | 0.8681    | 0.8287 | 0.8060    | 0.8526 |
| 0.3855        | 1.3838 | 1500 | 0.5213          | 0.7966   | 0.8702    | 0.8205 | 0.8202    | 0.8208 |
| 0.335         | 1.4760 | 1600 | 0.5387          | 0.7929   | 0.8702    | 0.8176 | 0.8159    | 0.8192 |
| 0.382         | 1.5683 | 1700 | 0.5267          | 0.7924   | 0.8710    | 0.8109 | 0.8377    | 0.7858 |
| 0.341         | 1.6605 | 1800 | 0.6565          | 0.7957   | 0.8722    | 0.8293 | 0.7871    | 0.8762 |
| 0.3492        | 1.7528 | 1900 | 0.5635          | 0.7957   | 0.8725    | 0.8298 | 0.7855    | 0.8795 |
| 0.3861        | 1.8450 | 2000 | 0.5204          | 0.7998   | 0.8752    | 0.8281 | 0.8063    | 0.8510 |
| 0.3451        | 1.9373 | 2100 | 0.5854          | 0.7984   | 0.8757    | 0.8316 | 0.7893    | 0.8787 |
| 0.2915        | 2.0295 | 2200 | 0.6308          | 0.8021   | 0.8744    | 0.8354 | 0.7897    | 0.8868 |
| 0.2264        | 2.1218 | 2300 | 0.7711          | 0.7984   | 0.8741    | 0.8234 | 0.8172    | 0.8298 |
| 0.244         | 2.2140 | 2400 | 0.7302          | 0.8030   | 0.8742    | 0.8346 | 0.7960    | 0.8770 |
| 0.2477        | 2.3063 | 2500 | 0.8263          | 0.7915   | 0.8721    | 0.8154 | 0.8180    | 0.8127 |
| 0.2356        | 2.3985 | 2600 | 0.8275          | 0.7980   | 0.8734    | 0.8301 | 0.7926    | 0.8713 |
| 0.2122        | 2.4908 | 2700 | 0.8132          | 0.7980   | 0.8723    | 0.8234 | 0.8155    | 0.8314 |
| 0.2443        | 2.5830 | 2800 | 0.7874          | 0.8007   | 0.8728    | 0.8269 | 0.8139    | 0.8404 |
| 0.2275        | 2.6753 | 2900 | 0.7503          | 0.8003   | 0.8738    | 0.8322 | 0.7938    | 0.8746 |
| 0.2476        | 2.7675 | 3000 | 0.7822          | 0.7957   | 0.8731    | 0.8206 | 0.8163    | 0.8249 |
| 0.1961        | 2.8598 | 3100 | 0.7780          | 0.8021   | 0.8731    | 0.8304 | 0.8071    | 0.8550 |
| 0.2536        | 2.9520 | 3200 | 0.7687          | 0.8017   | 0.8728    | 0.8298 | 0.8074    | 0.8534 |
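One pattern worth noting in the table: validation loss bottoms out at 0.4537 at step 1000 (epoch 0.92) and climbs afterwards even as training loss keeps falling, which suggests overfitting after roughly the first epoch. A small sketch of picking the lowest-loss checkpoint from a logged history (the pairs below are a subset copied from the table; in a Trainer setup this is usually handled by `load_best_model_at_end=True`):

```python
# Find the checkpoint with the lowest validation loss from a logged history.
# The (step, validation_loss) pairs are a representative subset of the
# table above.
history = [
    (100, 0.5441), (500, 0.4766), (1000, 0.4537), (1500, 0.5213),
    (2000, 0.5204), (2500, 0.8263), (3000, 0.7822), (3200, 0.7687),
]

# min() over the loss component selects the best checkpoint.
best_step, best_loss = min(history, key=lambda pair: pair[1])
```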

### Framework versions

- Transformers 4.52.4
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1