ES-ENG-mDeBERTa-sentiment

This model is a fine-tuned version of microsoft/mdeberta-v3-base on a custom dataset.

The best checkpoint (epoch 20; training was stopped after 23 of the scheduled 30 epochs) achieves the following results on the evaluation set:

  • Loss: 0.7549
  • Accuracy: 0.6806
  • F1: 0.6783
  • Precision: 0.6775
  • Recall: 0.6806

Intended uses & limitations

Note that commercial use of this model is prohibited.
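For non-commercial use, the model can be loaded with the `transformers` pipeline API. This is only a sketch: the repo id below is a placeholder assumption (substitute the actual Hugging Face path), and the example sentences are illustrative.

```python
# Minimal inference sketch. The model path is a placeholder -- replace it
# with the actual Hugging Face repo id for this model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ES-ENG-mDeBERTa-sentiment",  # placeholder repo id (assumption)
)

# mDeBERTa-v3 is multilingual, so Spanish and English inputs both work.
print(classifier("Me encantó la película."))
print(classifier("The service was terrible."))
```

Each call returns a list of `{"label": ..., "score": ...}` dictionaries, one per input string.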

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-06
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|---------------|-------|------|-----------------|----------|--------|-----------|--------|
| 1.0833        | 1.0   | 208  | 1.0218          | 0.3835   | 0.2413 | 0.4619    | 0.3835 |
| 0.9826        | 2.0   | 416  | 0.9220          | 0.5673   | 0.5524 | 0.5583    | 0.5673 |
| 0.8951        | 3.0   | 624  | 0.8435          | 0.5916   | 0.5836 | 0.5853    | 0.5916 |
| 0.8377        | 4.0   | 832  | 0.8200          | 0.5994   | 0.5916 | 0.5931    | 0.5994 |
| 0.8083        | 5.0   | 1040 | 0.8110          | 0.5977   | 0.5960 | 0.5957    | 0.5977 |
| 0.7854        | 6.0   | 1248 | 0.8009          | 0.5994   | 0.5943 | 0.5916    | 0.5994 |
| 0.7699        | 7.0   | 1456 | 0.7919          | 0.6084   | 0.6065 | 0.6055    | 0.6084 |
| 0.7489        | 8.0   | 1664 | 0.7827          | 0.6230   | 0.6164 | 0.6192    | 0.6230 |
| 0.7323        | 9.0   | 1872 | 0.7739          | 0.6272   | 0.6251 | 0.6246    | 0.6272 |
| 0.7162        | 10.0  | 2080 | 0.7725          | 0.6408   | 0.6332 | 0.6351    | 0.6408 |
| 0.6958        | 11.0  | 2288 | 0.7636          | 0.6414   | 0.6393 | 0.6383    | 0.6414 |
| 0.6816        | 12.0  | 2496 | 0.7582          | 0.6495   | 0.6491 | 0.6516    | 0.6495 |
| 0.6706        | 13.0  | 2704 | 0.7492          | 0.6628   | 0.6607 | 0.6595    | 0.6628 |
| 0.6569        | 14.0  | 2912 | 0.7554          | 0.6586   | 0.6580 | 0.6579    | 0.6586 |
| 0.6422        | 15.0  | 3120 | 0.7525          | 0.6676   | 0.6661 | 0.6667    | 0.6676 |
| 0.6408        | 16.0  | 3328 | 0.7527          | 0.6660   | 0.6658 | 0.6669    | 0.6660 |
| 0.6273        | 17.0  | 3536 | 0.7483          | 0.6712   | 0.6700 | 0.6710    | 0.6712 |
| 0.6186        | 18.0  | 3744 | 0.7531          | 0.6748   | 0.6731 | 0.6727    | 0.6748 |
| 0.6107        | 19.0  | 3952 | 0.7482          | 0.6799   | 0.6786 | 0.6782    | 0.6799 |
| 0.6055        | 20.0  | 4160 | 0.7549          | 0.6806   | 0.6783 | 0.6775    | 0.6806 |
| 0.6026        | 21.0  | 4368 | 0.7574          | 0.6725   | 0.6719 | 0.6733    | 0.6725 |
| 0.5906        | 22.0  | 4576 | 0.7587          | 0.6728   | 0.6721 | 0.6723    | 0.6728 |
| 0.5888        | 23.0  | 4784 | 0.7621          | 0.6761   | 0.6756 | 0.6758    | 0.6761 |

Framework versions

  • Transformers 4.29.2
  • PyTorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.13.3