train_mrpc_123_1760637677

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the mrpc dataset (MRPC, the Microsoft Research Paraphrase Corpus from GLUE). It achieves the following results on the evaluation set:

  • Loss: 0.2207
  • Num Input Tokens Seen: 6774288
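
The checkpoint is a PEFT adapter, so it loads on top of the base model rather than standing alone. Below is a minimal usage sketch, assuming the adapter is published as rbelanec/train_mrpc_123_1760637677 and that you have access to the gated Llama 3 base weights; the prompt template used during training is not documented here, so the prompt below is illustrative only.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Adapter repo id assumed from this card's name; base model as stated above.
ADAPTER_ID = "rbelanec/train_mrpc_123_1760637677"
BASE_ID = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoPeftModelForCausalLM.from_pretrained(
    ADAPTER_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# MRPC asks whether two sentences are paraphrases; this phrasing is a guess,
# not the template the adapter was trained with.
prompt = (
    "Are these two sentences paraphrases? Answer yes or no.\n"
    "Sentence 1: The company said revenue rose 10% last quarter.\n"
    "Sentence 2: Revenue grew by 10% in the last quarter, the company said.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```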

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
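
For readers reproducing the run, here is a hedged sketch of how these settings map onto transformers TrainingArguments; the output directory name is borrowed from this card's title, and model, dataset, and PEFT wiring are omitted.

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; everything else is left at defaults.
args = TrainingArguments(
    output_dir="train_mrpc_123_1760637677",
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```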

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1245        | 1.0   | 826   | 0.1316          | 339568            |
| 0.0588        | 2.0   | 1652  | 0.1152          | 678688            |
| 0.0395        | 3.0   | 2478  | 0.1178          | 1017368           |
| 0.0951        | 4.0   | 3304  | 0.1022          | 1356744           |
| 0.1357        | 5.0   | 4130  | 0.1014          | 1694912           |
| 0.0448        | 6.0   | 4956  | 0.1049          | 2033992           |
| 0.0546        | 7.0   | 5782  | 0.1071          | 2372464           |
| 0.0754        | 8.0   | 6608  | 0.1117          | 2710624           |
| 0.0958        | 9.0   | 7434  | 0.1077          | 3049392           |
| 0.0379        | 10.0  | 8260  | 0.1249          | 3388032           |
| 0.0314        | 11.0  | 9086  | 0.1146          | 3727312           |
| 0.0474        | 12.0  | 9912  | 0.1567          | 4065504           |
| 0.0135        | 13.0  | 10738 | 0.1912          | 4404624           |
| 0.0749        | 14.0  | 11564 | 0.1934          | 4743080           |
| 0.0007        | 15.0  | 12390 | 0.2621          | 5082240           |
| 0.0004        | 16.0  | 13216 | 0.2641          | 5420840           |
| 0.0004        | 17.0  | 14042 | 0.2825          | 5759384           |
| 0.0002        | 18.0  | 14868 | 0.2862          | 6097784           |
| 0.0003        | 19.0  | 15694 | 0.2887          | 6435544           |
| 0.0023        | 20.0  | 16520 | 0.2904          | 6774288           |
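
Validation loss bottoms out at 0.1014 around epoch 5 and climbs steadily afterward while training loss drops toward zero, a typical overfitting pattern, so earlier checkpoints may generalize better than the final one. The following sketch shows checkpoint-selection and early-stopping settings (not part of the original run) that would retain the best epoch.

```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Illustrative additions, not used in the original run: evaluate and save
# every epoch, keep the checkpoint with the lowest eval_loss, and stop once
# eval_loss has not improved for three consecutive evaluations.
args = TrainingArguments(
    output_dir="train_mrpc_123_1760637677",
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
stopper = EarlyStoppingCallback(early_stopping_patience=3)
# Pass both to the trainer: Trainer(..., args=args, callbacks=[stopper]).
```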

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • Pytorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4