train_svamp_123_1768397589

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1011
  • Num Input Tokens Seen: 688048
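Since the framework versions below list PEFT, this checkpoint is an adapter on top of the base model rather than a full set of weights. A minimal usage sketch, assuming the adapter is published as rbelanec/train_svamp_123_1768397589 and that you have access to the gated meta-llama base model (the prompt is an illustrative SVAMP-style word problem, not from the dataset):

```python
# Hedged sketch: load the base model, then apply this PEFT adapter on top.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, "rbelanec/train_svamp_123_1768397589")

prompt = "A shopkeeper had 25 apples and sold 8. How many apples are left?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```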

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
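The schedule above combines a 0.1 warmup ratio with cosine decay. A self-contained sketch of what that implies, using the step counts from the results table below (~316 steps per epoch, so roughly 3160 total over 10 epochs):

```python
# Sketch of the implied LR schedule: linear warmup over the first 10% of
# steps, then cosine decay from the peak learning rate down to zero.
import math

def lr_at(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 3160  # ~316 steps/epoch x 10 epochs, per the results table
print(lr_at(0, total))     # 0.0 at the start of warmup
print(lr_at(316, total))   # peak 5e-05 at the end of warmup
print(lr_at(3160, total))  # ~0.0 at the end of training
```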

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 1.5757        | 0.5016 | 158  | 1.4646          | 34848             |
| 0.213         | 1.0032 | 316  | 0.1799          | 69056             |
| 0.1104        | 1.5048 | 474  | 0.1375          | 103504            |
| 0.0694        | 2.0063 | 632  | 0.1130          | 138080            |
| 0.0537        | 2.5079 | 790  | 0.1072          | 172640            |
| 0.0464        | 3.0095 | 948  | 0.1069          | 207200            |
| 0.1674        | 3.5111 | 1106 | 0.1090          | 241792            |
| 0.0144        | 4.0127 | 1264 | 0.1011          | 276176            |
| 0.0696        | 4.5143 | 1422 | 0.1054          | 310480            |
| 0.1156        | 5.0159 | 1580 | 0.1075          | 345184            |
| 0.0234        | 5.5175 | 1738 | 0.1137          | 380064            |
| 0.1311        | 6.0190 | 1896 | 0.1177          | 414384            |
| 0.0081        | 6.5206 | 2054 | 0.1186          | 448928            |
| 0.0118        | 7.0222 | 2212 | 0.1246          | 483280            |
| 0.0019        | 7.5238 | 2370 | 0.1259          | 517648            |
| 0.0526        | 8.0254 | 2528 | 0.1323          | 552192            |
| 0.018         | 8.5270 | 2686 | 0.1309          | 586528            |
| 0.0694        | 9.0286 | 2844 | 0.1327          | 621264            |
| 0.0292        | 9.5302 | 3002 | 0.1330          | 655952            |

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • Pytorch 2.9.1+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4
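A possible way to recreate this environment, assuming pip and a CUDA 12.8 build of PyTorch (the extra index URL follows the standard PyTorch wheel-index pattern):

```shell
# Pin the library versions listed above.
pip install peft==0.17.1 transformers==4.51.3 datasets==4.0.0 tokenizers==0.21.4
# PyTorch 2.9.1 with CUDA 12.8 wheels.
pip install torch==2.9.1 --index-url https://download.pytorch.org/whl/cu128
```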