xlm_r_idiom_classifier

This model is a fine-tuned version of xlm-roberta-base on the gsarti/magpie dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2194
  • Accuracy: 0.9447
  • F1: 0.9637
  • Precision: 0.9717
  • Recall: 0.9558
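
The snippet below is a minimal usage sketch for loading the checkpoint as a standard sequence classifier with the transformers library. The example sentence and the binary idiomatic-vs-literal label reading are illustrative assumptions based on the model name, not details stated in this card.

```python
# Minimal usage sketch. Assumes the checkpoint exposes a standard
# XLM-R sequence-classification head; the label semantics
# (idiomatic vs. literal) are an assumption based on the model name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SasmithaLochana/xlm_r_idiom_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "He finally kicked the bucket after a long illness."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred_id, str(pred_id)))
```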

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 5
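
For reproducibility, here is a hedged sketch of how the hyperparameters above map onto transformers.TrainingArguments. The output_dir, the 100-step evaluation cadence (inferred from the results table below), and any setting not listed above are assumptions.

```python
# Hedged sketch: mapping the listed hyperparameters onto
# transformers.TrainingArguments. Settings not stated in the card
# (output_dir, eval cadence) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm_r_idiom_classifier",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    optim="adamw_torch_fused",      # AdamW, betas=(0.9, 0.999), eps=1e-8
    seed=42,
    eval_strategy="steps",
    eval_steps=100,                 # assumed from the 100-step logging below
)
```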

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|---------------|--------|------|-----------------|----------|--------|-----------|--------|
| 2.2827        | 0.1599 | 100  | 0.5334          | 0.7679   | 0.8687 | 0.7679    | 1.0    |
| 1.7807        | 0.3199 | 200  | 0.4303          | 0.8030   | 0.8662 | 0.9052    | 0.8304 |
| 1.5338        | 0.4798 | 300  | 0.3397          | 0.8621   | 0.9109 | 0.9037    | 0.9183 |
| 1.4698        | 0.6397 | 400  | 0.2908          | 0.8878   | 0.9289 | 0.9045    | 0.9546 |
| 1.3709        | 0.7997 | 500  | 0.2463          | 0.9094   | 0.9428 | 0.9142    | 0.9733 |
| 1.0718        | 0.9596 | 600  | 0.2248          | 0.9094   | 0.9400 | 0.9552    | 0.9253 |
| 0.9184        | 1.1184 | 700  | 0.2056          | 0.9186   | 0.9460 | 0.9647    | 0.9279 |
| 0.8307        | 1.2783 | 800  | 0.2476          | 0.9006   | 0.9324 | 0.9751    | 0.8934 |
| 0.8787        | 1.4382 | 900  | 0.1804          | 0.9357   | 0.9585 | 0.9507    | 0.9663 |
| 0.7389        | 1.5982 | 1000 | 0.1726          | 0.9375   | 0.9591 | 0.9631    | 0.9552 |
| 0.7945        | 1.7581 | 1100 | 0.1800          | 0.9330   | 0.9559 | 0.9662    | 0.9458 |
| 0.7269        | 1.9180 | 1200 | 0.2204          | 0.9170   | 0.9438 | 0.9829    | 0.9077 |
| 0.4623        | 2.0768 | 1300 | 0.1766          | 0.9415   | 0.9617 | 0.9668    | 0.9566 |
| 0.4865        | 2.2367 | 1400 | 0.1940          | 0.9348   | 0.9567 | 0.9748    | 0.9394 |
| 0.4601        | 2.3966 | 1500 | 0.1675          | 0.9417   | 0.9616 | 0.9747    | 0.9487 |
| 0.4357        | 2.5566 | 1600 | 0.1919          | 0.9370   | 0.9583 | 0.9751    | 0.9420 |
| 0.4754        | 2.7165 | 1700 | 0.1699          | 0.9411   | 0.9613 | 0.9707    | 0.9520 |
| 0.4924        | 2.8764 | 1800 | 0.1600          | 0.9453   | 0.9642 | 0.9712    | 0.9572 |
| 0.3183        | 3.0352 | 1900 | 0.1744          | 0.9426   | 0.9623 | 0.9722    | 0.9525 |
| 0.3434        | 3.1951 | 2000 | 0.1726          | 0.9458   | 0.9644 | 0.9718    | 0.9572 |
| 0.2877        | 3.3551 | 2100 | 0.1888          | 0.9483   | 0.9662 | 0.9682    | 0.9643 |
| 0.2688        | 3.5150 | 2200 | 0.2472          | 0.9307   | 0.9536 | 0.9817    | 0.9271 |
| 0.3482        | 3.6749 | 2300 | 0.1953          | 0.9476   | 0.9659 | 0.9644    | 0.9675 |
| 0.3906        | 3.8349 | 2400 | 0.1884          | 0.9458   | 0.9644 | 0.9734    | 0.9555 |
| 0.2702        | 3.9948 | 2500 | 0.1887          | 0.9505   | 0.9678 | 0.9672    | 0.9684 |
| 0.2212        | 4.1535 | 2600 | 0.2325          | 0.9429   | 0.9623 | 0.9759    | 0.9490 |
| 0.2978        | 4.3135 | 2700 | 0.2086          | 0.9453   | 0.9641 | 0.9723    | 0.9561 |
| 0.2364        | 4.4734 | 2800 | 0.2204          | 0.9471   | 0.9653 | 0.9729    | 0.9578 |
| 0.1388        | 4.6333 | 2900 | 0.2203          | 0.9462   | 0.9647 | 0.9720    | 0.9575 |
| 0.3332        | 4.7933 | 3000 | 0.2168          | 0.9476   | 0.9657 | 0.9707    | 0.9607 |
| 0.2389        | 4.9532 | 3100 | 0.2194          | 0.9447   | 0.9637 | 0.9717    | 0.9558 |
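
The four metric columns are consistent with a binary-classification compute_metrics callback; for example, the early rows where recall is 1.0 and precision equals accuracy match an all-positive predictor under binary averaging. The sketch below, using scikit-learn, is an assumption about how the numbers were produced; the card does not say.

```python
# Hedged sketch of a compute_metrics callback producing the four
# metric columns above. Binary averaging is an assumption inferred
# from the values in the table, not stated in the card.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```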

Framework versions

  • Transformers 5.2.0
  • Pytorch 2.9.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.2