train_svamp_789_1757596135

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0822
  • Num Input Tokens Seen: 704320

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent configuration sketch in code follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 789
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
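
For reference, the settings above could be expressed with the Hugging Face transformers TrainingArguments roughly as follows. The exact training script is not included in this card, so treat this as an illustrative sketch only; the output_dir is a placeholder, and the Adam betas/epsilon noted in the comment are the library defaults, which match the values listed above.

```python
# Hypothetical sketch of the listed hyperparameters via transformers.TrainingArguments.
# Not the actual training script used for this run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_svamp_789_1757596135",  # placeholder output path
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",          # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08 (defaults)
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```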

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.1544 | 0.5 | 79 | 0.1777 | 35392 |
| 0.0298 | 1.0 | 158 | 0.1001 | 70288 |
| 0.0636 | 1.5 | 237 | 0.1024 | 105936 |
| 0.0264 | 2.0 | 316 | 0.0828 | 140896 |
| 0.0241 | 2.5 | 395 | 0.0822 | 175840 |
| 0.0188 | 3.0 | 474 | 0.0888 | 211504 |
| 0.001 | 3.5 | 553 | 0.0880 | 246864 |
| 0.0035 | 4.0 | 632 | 0.0973 | 281664 |
| 0.0018 | 4.5 | 711 | 0.0901 | 317152 |
| 0.0029 | 5.0 | 790 | 0.1132 | 352048 |
| 0.0018 | 5.5 | 869 | 0.1072 | 387600 |
| 0.0 | 6.0 | 948 | 0.1005 | 422400 |
| 0.0001 | 6.5 | 1027 | 0.1113 | 457792 |
| 0.0004 | 7.0 | 1106 | 0.1146 | 492720 |
| 0.0 | 7.5 | 1185 | 0.1173 | 528336 |
| 0.0 | 8.0 | 1264 | 0.1171 | 563312 |
| 0.0001 | 8.5 | 1343 | 0.1185 | 598800 |
| 0.0 | 9.0 | 1422 | 0.1188 | 633968 |
| 0.0 | 9.5 | 1501 | 0.1180 | 669456 |
| 0.0001 | 10.0 | 1580 | 0.1172 | 704320 |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
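
Since this is a PEFT adapter trained on top of meta-llama/Meta-Llama-3-8B-Instruct, it can be attached to the base model roughly as shown below. This is a usage sketch, not an official snippet from the training repository; it assumes the adapter is hosted at rbelanec/train_svamp_789_1757596135.

```python
# Illustrative sketch: load the base model and apply this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_svamp_789_1757596135"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # applies the adapter weights
```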