train_mrpc_1753094151

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the MRPC (Microsoft Research Paraphrase Corpus) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0991
  • Num Input Tokens Seen: 3388032
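
Since the framework list at the bottom of this card includes PEFT, the checkpoint is presumably an adapter rather than full model weights. Below is a minimal loading sketch under that assumption; the prompt template is purely illustrative, since the card does not document how MRPC sentence pairs were formatted during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # gated repo: requires accepted access
adapter_id = "rbelanec/train_mrpc_1753094151"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# MRPC is paraphrase detection; this prompt format is an assumption,
# not the template used for training.
prompt = (
    "Are the following two sentences paraphrases? Answer yes or no.\n"
    "Sentence 1: The company posted strong quarterly results.\n"
    "Sentence 2: Quarterly earnings at the company were strong.\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=5)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```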

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
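
As a rough reconstruction, the settings above map onto Hugging Face TrainingArguments as shown below. This is a hedged sketch, not the published training script: the data preprocessing, collator, and LoRA/PEFT configuration are not documented on this card, and the output_dir is assumed from the model name.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_mrpc_1753094151",  # assumed; matches the model name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",          # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```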

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---------------|-------|------|-----------------|-------------------|
| 0.117         | 0.5   | 413  | 0.1465          | 170336            |
| 0.148         | 1.0   | 826  | 0.1692          | 339568            |
| 0.1084        | 1.5   | 1239 | 0.1439          | 509456            |
| 0.0543        | 2.0   | 1652 | 0.0991          | 678688            |
| 0.0997        | 2.5   | 2065 | 0.1339          | 847104            |
| 0.007         | 3.0   | 2478 | 0.1466          | 1017368           |
| 0.0013        | 3.5   | 2891 | 0.1655          | 1187704           |
| 0.0184        | 4.0   | 3304 | 0.1684          | 1356744           |
| 0.0027        | 4.5   | 3717 | 0.1784          | 1526088           |
| 0.0006        | 5.0   | 4130 | 0.2186          | 1694912           |
| 0.1039        | 5.5   | 4543 | 0.2599          | 1864000           |
| 0.0006        | 6.0   | 4956 | 0.2716          | 2033992           |
| 0.0           | 6.5   | 5369 | 0.3018          | 2203208           |
| 0.0           | 7.0   | 5782 | 0.2876          | 2372464           |
| 0.0           | 7.5   | 6195 | 0.4018          | 2540880           |
| 0.0           | 8.0   | 6608 | 0.3949          | 2710624           |
| 0.0           | 8.5   | 7021 | 0.4205          | 2880128           |
| 0.0           | 9.0   | 7434 | 0.4244          | 3049392           |
| 0.0           | 9.5   | 7847 | 0.4415          | 3218352           |
| 0.0           | 10.0  | 8260 | 0.4357          | 3388032           |

Validation loss bottoms out at epoch 2.0 (0.0991), which matches the evaluation loss reported at the top of this card, suggesting the best checkpoint was kept. From epoch 3 onward the training loss collapses toward zero while validation loss climbs steadily, indicating overfitting in the later epochs.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1
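
If you want to reproduce this environment, the versions above pin roughly to the following requirements (note that 2.7.1+cu126 denotes the CUDA 12.6 build of PyTorch, installed from the PyTorch wheel index):

```
peft==0.15.2
transformers==4.51.3
torch==2.7.1        # 2.7.1+cu126: CUDA 12.6 build
datasets==3.6.0
tokenizers==0.21.1
```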