ddd314bfb7b8f41aee216bed7b21bee2

This model is a fine-tuned version of FacebookAI/xlm-roberta-large-finetuned-conll02-dutch on the contemmcm/cls_mmlu dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3898
  • Data Size: 0.25
  • Epoch Runtime: 24.4660
  • Accuracy: 0.2487
  • F1 Macro: 0.0996
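
For reference, a minimal loading sketch. The model id comes from this card; the sequence-classification head and the input format are assumptions, not confirmed by the card (the accuracy and macro-F1 metrics suggest a classification task):

```python
# Minimal loading sketch. The model id is taken from this card; the
# sequence-classification head and the input format are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "contemmcm/ddd314bfb7b8f41aee216bed7b21bee2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Question: ... Choices: ..."  # hypothetical input format
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```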

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 32
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant
  • num_epochs: 50
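
A minimal sketch of how the hyperparameters above map onto transformers.TrainingArguments. The per-device batch size of 8 across 4 GPUs yields the listed total batch size of 32; output_dir and anything else not on this card are assumptions:

```python
# Sketch of the hyperparameters above as transformers.TrainingArguments.
# Per-device batch size 8 on 4 GPUs gives the total batch size of 32.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",  # assumption: not stated on this card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```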

Training results

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------------:|:--------:|:--------:|
| No log        | 0     | 0    | 1.4378          | 0         | 3.2351        | 0.2513   | 0.1312   |
| No log        | 1     | 438  | 1.4304          | 0.0078    | 3.8023        | 0.2527   | 0.1008   |
| No log        | 2     | 876  | 1.3882          | 0.0156    | 4.6046        | 0.2460   | 0.0999   |
| No log        | 3     | 1314 | 1.3912          | 0.0312    | 6.6180        | 0.2533   | 0.1011   |
| No log        | 4     | 1752 | 1.3940          | 0.0625    | 10.0974       | 0.2487   | 0.0996   |
| 0.0785        | 5     | 2190 | 1.3944          | 0.125     | 15.1189       | 0.2527   | 0.1008   |
| 0.1856        | 6     | 2628 | 1.3898          | 0.25      | 24.4660       | 0.2487   | 0.0996   |
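
The Accuracy and F1 Macro columns are standard multi-class classification metrics. Below is a minimal sketch of a Trainer-style compute_metrics function that would produce them, assuming a scikit-learn implementation (the card does not name the metric library):

```python
# Sketch of computing the Accuracy and F1 Macro columns above.
# scikit-learn is an assumed implementation, not confirmed by the card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro"),
    }
```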

Framework versions

  • Transformers 4.57.0
  • Pytorch 2.8.0+cu128
  • Datasets 4.3.0
  • Tokenizers 0.22.1