---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prefix-tuning
  - generated_from_trainer
model-index:
  - name: train_math_qa_1754652176
    results: []
---

# train_math_qa_1754652176

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the math_qa dataset.
It achieves the following results on the evaluation set:

- Loss: 0.8026
- Num Input Tokens Seen: 38732208
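
Because this is a prefix-tuning adapter rather than a full checkpoint, it must be loaded on top of the base model with PEFT. A minimal usage sketch follows; the adapter repo id is inferred from the card name and may differ, and access to the gated Llama 3 base weights is assumed:

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "rbelanec/train_math_qa_1754652176"  # assumed repo id, adjust if needed
base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# AutoPeftModelForCausalLM reads the adapter config, loads the base model,
# and attaches the prefix-tuning weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

messages = [{"role": "user", "content": "A car travels 180 km in 3 hours. What is its average speed?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```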

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
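
For reference, here is a hedged sketch of the equivalent `transformers.TrainingArguments`. The run itself was launched through LLaMA-Factory, so the actual config file will look different; the argument names below are the standard Trainer equivalents of the values listed above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_math_qa_1754652176",  # assumed output directory name
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```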

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.9385        | 0.5   | 3357  | 0.8721          | 1938656           |
| 0.8336        | 1.0   | 6714  | 0.8212          | 3870688           |
| 0.8288        | 1.5   | 10071 | 0.8165          | 5801696           |
| 0.8502        | 2.0   | 13428 | 0.8096          | 7741288           |
| 0.7843        | 2.5   | 16785 | 0.8062          | 9678632           |
| 0.8009        | 3.0   | 20142 | 0.8065          | 11611120          |
| 0.8214        | 3.5   | 23499 | 0.8087          | 13550032          |
| 0.7798        | 4.0   | 26856 | 0.8033          | 15489040          |
| 0.7908        | 4.5   | 30213 | 0.8038          | 17420720          |
| 0.813         | 5.0   | 33570 | 0.8034          | 19360624          |
| 0.7837        | 5.5   | 36927 | 0.8044          | 21295472          |
| 0.8133        | 6.0   | 40284 | 0.8047          | 23230504          |
| 0.7966        | 6.5   | 43641 | 0.8026          | 25166536          |
| 0.8272        | 7.0   | 46998 | 0.8034          | 27107328          |
| 0.7807        | 7.5   | 50355 | 0.8030          | 29044256          |
| 0.7946        | 8.0   | 53712 | 0.8029          | 30985288          |
| 0.8           | 8.5   | 57069 | 0.8026          | 32925544          |
| 0.8165        | 9.0   | 60426 | 0.8028          | 34856680          |
| 0.7526        | 9.5   | 63783 | 0.8028          | 36790952          |
| 0.7855        | 10.0  | 67140 | 0.8029          | 38732208          |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1