---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_codealpacapy_456_1765305198
  results: []
---

# train_codealpacapy_456_1765305198

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the codealpacapy dataset.
It achieves the following results on the evaluation set:

- Loss: 0.4399
- Num Input Tokens Seen: 24973864
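
This card describes a PEFT (LoRA) adapter rather than a full standalone model, so the adapter is loaded on top of the base model. A minimal loading and generation sketch follows; the adapter repository id `rbelanec/train_codealpacapy_456_1765305198` is an assumption inferred from this card's name and may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_codealpacapy_456_1765305198"  # assumed repo id; adjust if different

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Generate with the Llama 3 instruct chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```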

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a rough `TrainingArguments` equivalent is sketched after the list:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 456
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
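
The run was launched with LLaMA-Factory, whose config keys differ from the Hugging Face `Trainer`. For readers reproducing it with the `Trainer` directly, the hyperparameters above map onto `TrainingArguments` roughly as follows (a sketch under that assumption, not the original launch config):

```python
from transformers import TrainingArguments

# Approximate TrainingArguments equivalent of the hyperparameters above.
args = TrainingArguments(
    output_dir="train_codealpacapy_456_1765305198",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",       # AdamW defaults: betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
    eval_strategy="epoch",     # evaluation was logged once per epoch (see table below)
)
```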

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.4909        | 1.0   | 1908  | 0.4533          | 1246832           |
| 0.5107        | 2.0   | 3816  | 0.4463          | 2497936           |
| 0.4271        | 3.0   | 5724  | 0.4399          | 3743760           |
| 0.3778        | 4.0   | 7632  | 0.4473          | 4991472           |
| 0.9067        | 5.0   | 9540  | 0.4627          | 6239608           |
| 0.2917        | 6.0   | 11448 | 0.4996          | 7485248           |
| 0.4214        | 7.0   | 13356 | 0.5415          | 8733024           |
| 0.3429        | 8.0   | 15264 | 0.5936          | 9983720           |
| 0.1361        | 9.0   | 17172 | 0.6715          | 11229792          |
| 0.1287        | 10.0  | 19080 | 0.7588          | 12476552          |
| 0.0863        | 11.0  | 20988 | 0.8549          | 13725560          |
| 0.0963        | 12.0  | 22896 | 0.9736          | 14977976          |
| 0.0586        | 13.0  | 24804 | 1.0952          | 16225896          |
| 0.0351        | 14.0  | 26712 | 1.2337          | 17477224          |
| 0.013         | 15.0  | 28620 | 1.3411          | 18726216          |
| 0.0066        | 16.0  | 30528 | 1.4384          | 19973408          |
| 0.0078        | 17.0  | 32436 | 1.5158          | 21226656          |
| 0.0046        | 18.0  | 34344 | 1.5550          | 22472696          |
| 0.0027        | 19.0  | 36252 | 1.5828          | 23722376          |
| 0.0136        | 20.0  | 38160 | 1.5947          | 24973864          |
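
Validation loss bottoms out at epoch 3 (0.4399, the figure reported at the top of this card) and rises steadily through epoch 20, a typical overfitting curve. When reproducing the run with the Hugging Face `Trainer`, keeping the best checkpoint rather than the final one can be expressed as below; this is a hedged sketch, not the original run's documented checkpointing setup.

```python
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

# Retain the checkpoint with the lowest eval loss (epoch 3 in the table
# above) instead of the overfit final epoch.
args = TrainingArguments(
    output_dir="train_codealpacapy_456_1765305198",
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

# Optionally stop once eval loss has not improved for 3 consecutive epochs:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
```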

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1