---
library_name: transformers
license: mit
base_model: microsoft/deberta-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: deberta_textclassification
    results: []
---

# deberta_textclassification

This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3436
- Accuracy: 0.9520
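
The card does not document the task or label set. Assuming the Hub repository id is `duynhatran/deberta_textclassification` (inferred from the model name) and a standard sequence-classification head, a minimal inference sketch would look like this; the input sentence and printed labels are hypothetical:

```python
# Minimal inference sketch. Assumptions: the Hub repo id is
# "duynhatran/deberta_textclassification" and the model carries a
# standard sequence-classification head; label names are hypothetical.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="duynhatran/deberta_textclassification",
)

print(classifier("An example sentence to classify."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```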

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
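
These settings map directly onto `TrainingArguments` in transformers. A sketch under that assumption is shown below; the dataset, tokenizer, and `Trainer` wiring are not documented in this card, so only the argument object is constructed, and the output directory is an assumption:

```python
from transformers import TrainingArguments

# Hedged sketch of the training configuration listed above. Only the
# argument object is shown; the data and model wiring are not
# documented in this card.
args = TrainingArguments(
    output_dir="deberta_textclassification",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # betas=(0.9, 0.999) and epsilon=1e-08 are the library's default
    # Adam settings, so no explicit optimizer arguments are needed.
    eval_strategy="steps",
    eval_steps=500,  # matches the 500-step cadence in the results table
)
```

Passing this object to a `Trainer` together with the (undocumented) tokenized datasets would reproduce the run above, up to hardware nondeterminism.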

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3919        | 0.2924 | 500  | 0.2821          | 0.9184   |
| 0.274         | 0.5848 | 1000 | 0.2795          | 0.9362   |
| 0.2552        | 0.8772 | 1500 | 0.2469          | 0.9355   |
| 0.2148        | 1.1696 | 2000 | 0.3214          | 0.9421   |
| 0.1876        | 1.4620 | 2500 | 0.2636          | 0.9382   |
| 0.1505        | 1.7544 | 3000 | 0.2323          | 0.9467   |
| 0.1411        | 2.0468 | 3500 | 0.3445          | 0.9395   |
| 0.0676        | 2.3392 | 4000 | 0.3280          | 0.9414   |
| 0.1089        | 2.6316 | 4500 | 0.4225          | 0.9270   |
| 0.0888        | 2.9240 | 5000 | 0.2458          | 0.9520   |
| 0.0544        | 3.2164 | 5500 | 0.2877          | 0.9539   |
| 0.0366        | 3.5088 | 6000 | 0.3010          | 0.9553   |
| 0.0322        | 3.8012 | 6500 | 0.3508          | 0.9474   |
| 0.0313        | 4.0936 | 7000 | 0.3302          | 0.9520   |
| 0.0191        | 4.3860 | 7500 | 0.3527          | 0.9493   |
| 0.0118        | 4.6784 | 8000 | 0.3378          | 0.9513   |
| 0.0189        | 4.9708 | 8500 | 0.3436          | 0.9520   |
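
The card does not show how the Accuracy column was computed. A common pattern, assuming the standard `evaluate` library and the `Trainer`'s `compute_metrics` hook, is sketched below:

```python
import numpy as np
import evaluate

# Hedged sketch of an accuracy hook of the kind typically used to
# produce the Accuracy column above (the card does not show its own).
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # class with highest logit
    return accuracy.compute(predictions=predictions, references=labels)
```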

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.4.0
- Tokenizers 0.19.1