ES-ENG-mBERT-sentiment

This model is a fine-tuned version of bert-base-multilingual-cased for sentiment classification on a custom dataset.

The best checkpoint (epoch 14; training was stopped early at epoch 17 of the scheduled 30) achieves the following results on the evaluation set:

  • Loss: 0.8110
  • Accuracy: 0.6307
  • F1: 0.6298
  • Precision: 0.6291
  • Recall: 0.6307
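
Recall equals accuracy in these results, which is characteristic of support-weighted averaging over the classes. The averaging scheme is not stated in the card, so as an illustration only, here is a minimal pure-Python sketch of weighted precision/recall/F1:

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Per-class precision/recall/F1, averaged with weights = true-class support."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    n = len(y_true)
    p_w = r_w = f_w = 0.0
    for c in labels:
        tp = sum(1 for t, q in zip(y_true, y_pred) if t == c and q == c)
        pred_c = sum(1 for q in y_pred if q == c)
        true_c = support[c]
        prec = tp / pred_c if pred_c else 0.0
        rec = tp / true_c if true_c else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        w = true_c / n
        p_w += w * prec
        r_w += w * rec
        f_w += w * f1
    return p_w, r_w, f_w

# Toy labels (illustrative, not from the evaluation set):
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
p, r, f = weighted_prf(y_true, y_pred)
acc = sum(t == q for t, q in zip(y_true, y_pred)) / len(y_true)
# Weighted recall reduces to sum_c tp_c / n, i.e. it always equals accuracy.
```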

Intended uses & limitations

Note that commercial use of this model is prohibited.
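
A minimal usage sketch with the transformers pipeline API. The repository id below is a placeholder (the card does not state one); substitute the actual Hub repo or a local checkpoint path:

```python
from transformers import pipeline

def classify(texts, model_id="path/to/ES-ENG-mBERT-sentiment"):
    """Run the fine-tuned sentiment classifier on a list of texts.

    model_id is a placeholder -- point it at the model's Hub repository
    or a local directory containing the checkpoint.
    """
    clf = pipeline("text-classification", model=model_id)
    return clf(texts)

if __name__ == "__main__":
    print(classify(["Me encanta este producto.", "This was a waste of money."]))
```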

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-06
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
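
A linear scheduler decays the learning rate from its initial value to zero over the full run. With 208 optimizer steps per epoch (per the training results table) and 30 scheduled epochs, that is 6240 total steps. A minimal sketch, assuming no warmup (the card does not mention any):

```python
def linear_lr(step, base_lr=2e-6, total_steps=30 * 208):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Starts at 2e-6, reaches 1e-6 halfway through (step 3120), and 0 at step 6240.
```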

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0630        | 1.0   | 208  | 0.9989          | 0.4731   | 0.4044 | 0.4885    | 0.4731 |
| 0.9664        | 2.0   | 416  | 0.9144          | 0.5262   | 0.4845 | 0.5270    | 0.5262 |
| 0.9067        | 3.0   | 624  | 0.8648          | 0.5896   | 0.5844 | 0.5935    | 0.5896 |
| 0.8572        | 4.0   | 832  | 0.8294          | 0.6065   | 0.5984 | 0.6102    | 0.6065 |
| 0.8168        | 5.0   | 1040 | 0.8101          | 0.6107   | 0.6092 | 0.6119    | 0.6107 |
| 0.7897        | 6.0   | 1248 | 0.8213          | 0.6074   | 0.6015 | 0.6018    | 0.6074 |
| 0.7568        | 7.0   | 1456 | 0.7992          | 0.6194   | 0.6181 | 0.6176    | 0.6194 |
| 0.7465        | 8.0   | 1664 | 0.8089          | 0.6246   | 0.6183 | 0.6206    | 0.6246 |
| 0.7223        | 9.0   | 1872 | 0.7988          | 0.6236   | 0.6214 | 0.6207    | 0.6236 |
| 0.7045        | 10.0  | 2080 | 0.8390          | 0.6165   | 0.6080 | 0.6126    | 0.6165 |
| 0.6888        | 11.0  | 2288 | 0.8042          | 0.6291   | 0.6260 | 0.6257    | 0.6291 |
| 0.6710        | 12.0  | 2496 | 0.8088          | 0.6239   | 0.6212 | 0.6216    | 0.6239 |
| 0.6543        | 13.0  | 2704 | 0.8104          | 0.6256   | 0.6227 | 0.6216    | 0.6256 |
| 0.6409        | 14.0  | 2912 | 0.8110          | 0.6307   | 0.6298 | 0.6291    | 0.6307 |
| 0.6275        | 15.0  | 3120 | 0.8127          | 0.6298   | 0.6292 | 0.6299    | 0.6298 |
| 0.6176        | 16.0  | 3328 | 0.8334          | 0.6252   | 0.6217 | 0.6206    | 0.6252 |
| 0.6096        | 17.0  | 3536 | 0.8331          | 0.6256   | 0.6210 | 0.6210    | 0.6256 |

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.13.3