train_svamp_1757340249

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 0.1192
  • Num Input Tokens Seen: 704320
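
Since this is a PEFT adapter rather than a full model checkpoint, it must be loaded on top of the base model. The following is a minimal, untested sketch using the transformers and peft APIs; it assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights (and accelerate installed for device_map="auto").

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the gated base model; torch_dtype="auto" uses the checkpoint's dtype.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_1757340249")
model.eval()
```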

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments equivalent is sketched after the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 789
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
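
As referenced above, these settings map onto a transformers TrainingArguments configuration roughly as sketched below. This is a hedged reconstruction, not the original training script: the output_dir is hypothetical, and the dataset preprocessing, PEFT config, and Trainer wiring are not recorded in this card.

```python
from transformers import TrainingArguments

# Hedged sketch reconstructing the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="train_svamp_1757340249",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",  # AdamW as implemented in torch
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```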

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 2.025         | 0.5   | 79   | 2.0437          | 35392             |
| 1.2038        | 1.0   | 158  | 1.1733          | 70288             |
| 0.297         | 1.5   | 237  | 0.3298          | 105936            |
| 0.1033        | 2.0   | 316  | 0.1814          | 140896            |
| 0.1198        | 2.5   | 395  | 0.1598          | 175840            |
| 0.0879        | 3.0   | 474  | 0.1434          | 211504            |
| 0.0925        | 3.5   | 553  | 0.1374          | 246864            |
| 0.0641        | 4.0   | 632  | 0.1307          | 281664            |
| 0.0646        | 4.5   | 711  | 0.1304          | 317152            |
| 0.1027        | 5.0   | 790  | 0.1268          | 352048            |
| 0.1327        | 5.5   | 869  | 0.1245          | 387600            |
| 0.0613        | 6.0   | 948  | 0.1222          | 422400            |
| 0.0255        | 6.5   | 1027 | 0.1217          | 457792            |
| 0.0879        | 7.0   | 1106 | 0.1203          | 492720            |
| 0.0526        | 7.5   | 1185 | 0.1201          | 528336            |
| 0.0671        | 8.0   | 1264 | 0.1197          | 563312            |
| 0.0495        | 8.5   | 1343 | 0.1207          | 598800            |
| 0.0955        | 9.0   | 1422 | 0.1192          | 633968            |
| 0.0405        | 9.5   | 1501 | 0.1199          | 669456            |
| 0.0296        | 10.0  | 1580 | 0.1202          | 704320            |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
