svenbl80/roberta-base-finetuned-chatdoc-V5

This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results at the end of training:

  • Train Loss: 0.0133
  • Validation Loss: 0.4054
  • Train Accuracy: 0.9091
  • Epoch: 27

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0, amsgrad=False)
  • learning_rate: PolynomialDecay (initial_learning_rate=2e-05, decay_steps=360, end_learning_rate=0.0, power=1.0, cycle=False)
  • training_precision: float32
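The learning-rate schedule above is TensorFlow's PolynomialDecay with power=1.0, i.e. a linear ramp from 2e-05 down to 0 over 360 steps. A minimal pure-Python sketch of that formula (function and argument names are illustrative, not taken from the training script):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=360,
                     end_lr=0.0, power=1.0):
    """Decay from initial_lr to end_lr over decay_steps (linear when power=1.0)."""
    step = min(step, decay_steps)          # hold at end_lr after decay_steps
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 2e-05 at the first step
print(polynomial_decay(180))  # 1e-05, halfway through the decay
print(polynomial_decay(360))  # 0.0 once decay_steps is reached
```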

Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0831 | 1.0759 | 0.4848 | 0 |
| 0.9794 | 0.8853 | 0.5758 | 1 |
| 0.7951 | 0.7018 | 0.7273 | 2 |
| 0.6268 | 0.6546 | 0.7273 | 3 |
| 0.5226 | 0.5212 | 0.7778 | 4 |
| 0.4522 | 0.3905 | 0.8788 | 5 |
| 0.3329 | 0.3483 | 0.8990 | 6 |
| 0.2571 | 0.3290 | 0.9091 | 7 |
| 0.2283 | 0.3665 | 0.8485 | 8 |
| 0.2113 | 0.3656 | 0.8788 | 9 |
| 0.1757 | 0.3502 | 0.8586 | 10 |
| 0.1571 | 0.4281 | 0.8687 | 11 |
| 0.1274 | 0.3813 | 0.8687 | 12 |
| 0.1039 | 0.2856 | 0.9293 | 13 |
| 0.1168 | 0.3923 | 0.8889 | 14 |
| 0.1126 | 0.4691 | 0.8485 | 15 |
| 0.0674 | 0.2973 | 0.9192 | 16 |
| 0.0706 | 0.3268 | 0.8788 | 17 |
| 0.0495 | 0.3354 | 0.8990 | 18 |
| 0.0689 | 0.3869 | 0.9091 | 19 |
| 0.0340 | 0.3705 | 0.8990 | 20 |
| 0.0309 | 0.4067 | 0.8990 | 21 |
| 0.0375 | 0.3645 | 0.9091 | 22 |
| 0.0175 | 0.3753 | 0.9091 | 23 |
| 0.0197 | 0.4047 | 0.9091 | 24 |
| 0.0174 | 0.4093 | 0.9091 | 25 |
| 0.0144 | 0.4057 | 0.9091 | 26 |
| 0.0133 | 0.4054 | 0.9091 | 27 |
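Note that the final epoch is not the best one by validation loss: epoch 13 reaches 0.2856 (with 0.9293 train accuracy), after which validation loss drifts upward while train loss keeps falling, a typical overfitting pattern. Selecting the best checkpoint from such a log is a one-liner (the values below are copied from selected rows of the table above):

```python
# (epoch, validation loss) pairs copied from selected rows of the table above
val_loss = {7: 0.3290, 13: 0.2856, 16: 0.2973, 21: 0.4067, 27: 0.4054}

# Epoch with the lowest validation loss -- the natural early-stopping choice
best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, val_loss[best_epoch])  # 13 0.2856
```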

Framework versions

  • Transformers 4.28.0
  • TensorFlow 2.9.1
  • Datasets 2.15.0
  • Tokenizers 0.13.3