train_piqa_1754741701

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the piqa dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1320
  • Num Input Tokens Seen: 22103448

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
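With warmup_ratio 0.1 over the 36,260 total optimizer steps shown in the results table, the learning rate ramps linearly to 5e-05 over the first 3,626 steps and then follows a half-cosine decay toward zero. A minimal sketch of that schedule (mirroring the shape of transformers' cosine-with-warmup scheduler; the helper name `lr_at` is illustrative, not from the training code):

```python
import math

# Constants taken from the hyperparameters above; the total step count
# (36260) is the final "Step" value in the training-results table.
TOTAL_STEPS = 36260
WARMUP_RATIO = 0.1
PEAK_LR = 5e-05

WARMUP_STEPS = int(TOTAL_STEPS * WARMUP_RATIO)  # 3626 warmup steps

def lr_at(step: int) -> float:
    """Learning rate at a given step: linear warmup, then cosine decay to 0."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```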

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1248        | 0.5   | 1813  | 0.1588          | 1118368           |
| 0.0805        | 1.0   | 3626  | 0.1470          | 2216600           |
| 0.0542        | 1.5   | 5439  | 0.1503          | 3320792           |
| 0.087         | 2.0   | 7252  | 0.1482          | 4419000           |
| 0.1423        | 2.5   | 9065  | 0.1463          | 5525176           |
| 0.1749        | 3.0   | 10878 | 0.1625          | 6628280           |
| 0.1449        | 3.5   | 12691 | 0.1320          | 7736376           |
| 0.2093        | 4.0   | 14504 | 0.1529          | 8844408           |
| 0.2112        | 4.5   | 16317 | 0.1506          | 9951832           |
| 0.0171        | 5.0   | 18130 | 0.1470          | 11048200          |
| 0.2367        | 5.5   | 19943 | 0.1649          | 12157032          |
| 0.1281        | 6.0   | 21756 | 0.1827          | 13257624          |
| 0.0894        | 6.5   | 23569 | 0.1733          | 14360952          |
| 0.1768        | 7.0   | 25382 | 0.1844          | 15468632          |
| 0.1353        | 7.5   | 27195 | 0.1813          | 16574840          |
| 0.2293        | 8.0   | 29008 | 0.1806          | 17678024          |
| 0.2512        | 8.5   | 30821 | 0.1802          | 18780040          |
| 0.1342        | 9.0   | 32634 | 0.1810          | 19894712          |
| 0.086         | 9.5   | 34447 | 0.1813          | 21014840          |
| 0.0014        | 10.0  | 36260 | 0.1800          | 22103448          |
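The headline loss of 0.1320 is the minimum validation loss in this table, reached at epoch 3.5 (step 12691); validation loss drifts upward afterwards while training loss keeps falling, which is consistent with overfitting past that point. Picking the best checkpoint from the table is a one-liner:

```python
# (epoch, validation loss) pairs transcribed from the table above.
val_losses = [
    (0.5, 0.1588), (1.0, 0.1470), (1.5, 0.1503), (2.0, 0.1482),
    (2.5, 0.1463), (3.0, 0.1625), (3.5, 0.1320), (4.0, 0.1529),
    (4.5, 0.1506), (5.0, 0.1470), (5.5, 0.1649), (6.0, 0.1827),
    (6.5, 0.1733), (7.0, 0.1844), (7.5, 0.1813), (8.0, 0.1806),
    (8.5, 0.1802), (9.0, 0.1810), (9.5, 0.1813), (10.0, 0.1800),
]

# Best checkpoint = row with the lowest validation loss.
best_epoch, best_loss = min(val_losses, key=lambda pair: pair[1])
```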

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
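Since this is a PEFT adapter rather than a full model, it is loaded on top of the base Meta-Llama-3-8B-Instruct checkpoint. A minimal loading sketch, assuming the adapter repo id `rbelanec/train_piqa_1754741701` and the standard PEFT API (imports are kept inside the function so the sketch can be read without transformers/peft installed; calling it downloads the weights):

```python
def load_adapter(adapter_id: str = "rbelanec/train_piqa_1754741701"):
    """Load the base model and attach this LoRA/PEFT adapter.

    Sketch only: requires the transformers and peft packages, plus
    access to the gated meta-llama base checkpoint.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
    base = AutoModelForCausalLM.from_pretrained(base_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    # PeftModel.from_pretrained wraps the base model with the adapter weights.
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```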
Model tree for rbelanec/train_piqa_1754741701

Adapter
(2107)
this model

Evaluation results