---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - lora
  - generated_from_trainer
model-index:
  - name: train_siqa_1754507488
    results: []
---

train_siqa_1754507488

This model is a LoRA adapter for meta-llama/Meta-Llama-3-8B-Instruct, fine-tuned on the siqa (Social IQa) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2014
  • Num input tokens seen: 29,840,264
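
Although the description sections below are placeholders, the metadata (peft, lora) is enough for a minimal loading sketch. The adapter repo id rbelanec/train_siqa_1754507488 is an assumption inferred from the model name, and the prompt template used during training is not documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER = "rbelanec/train_siqa_1754507488"  # assumption: actual repo id may differ

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER)

prompt = "Question: ...\nAnswer:"  # SIQA-style prompt; training template unknown
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```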

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent transformers.TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
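
As a rough reproduction aid, the values above map onto transformers.TrainingArguments as follows. This is a minimal sketch only: the run was launched through llama-factory, whose YAML config (dataset spec, LoRA rank, cutoff length, etc.) is not reproduced here.

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above; llama-factory drives
# training through its own config, so this mapping is illustrative.
training_args = TrainingArguments(
    output_dir="train_siqa_1754507488",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",        # betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```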

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|--------------:|------:|------:|----------------:|------------------:|
| 0.1685        | 0.5   | 3759  | 0.2103          | 1495072           |
| 0.0643        | 1.0   | 7518  | 0.2110          | 2984720           |
| 0.1168        | 1.5   | 11277 | 0.2156          | 4477104           |
| 0.3262        | 2.0   | 15036 | 0.2014          | 5970384           |
| 0.1856        | 2.5   | 18795 | 0.2871          | 7462384           |
| 0.1235        | 3.0   | 22554 | 0.2563          | 8954176           |
| 0.0457        | 3.5   | 26313 | 0.4125          | 10445088          |
| 0.2364        | 4.0   | 30072 | 0.3816          | 11937344          |
| 0.0731        | 4.5   | 33831 | 0.4247          | 13430048          |
| 0.0           | 5.0   | 37590 | 0.4945          | 14920992          |
| 0.001         | 5.5   | 41349 | 0.4947          | 16412032          |
| 0.0001        | 6.0   | 45108 | 0.5627          | 17904680          |
| 0.0           | 6.5   | 48867 | 0.6525          | 19397416          |
| 0.0           | 7.0   | 52626 | 0.7320          | 20888856          |
| 0.0           | 7.5   | 56385 | 0.7387          | 22381080          |
| 0.0005        | 8.0   | 60144 | 0.9056          | 23872880          |
| 0.0           | 8.5   | 63903 | 1.0000          | 25363344          |
| 0.0           | 9.0   | 67662 | 0.9857          | 26855848          |
| 0.0           | 9.5   | 71421 | 1.0109          | 28348712          |
| 0.0           | 10.0  | 75180 | 1.0122          | 29840264          |

Validation loss reaches its minimum of 0.2014 at epoch 2.0 (step 15036) and climbs steadily afterwards; the evaluation loss reported above corresponds to that epoch-2 point, and the near-zero training losses from epoch 5 onward suggest overfitting in the later epochs.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
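
A quick environment sanity check against the versions above (a convenience sketch, not part of the original card):

```python
# Compare installed versions with the ones the adapter was trained under.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.8.0+cu128",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "OK" if have == want else f"differs (card used {want})"
    print(f"{name}: {have} {status}")
```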