---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prefix-tuning
  - generated_from_trainer
model-index:
  - name: train_codealpacapy_1755694512
    results: []
---

# train_codealpacapy_1755694512

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the codealpacapy dataset (a usage sketch is shown below). It achieves the following results on the evaluation set:

- Loss: 0.8759
- Num Input Tokens Seen: 10232192
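Because this is a PEFT prefix-tuning adapter rather than a full set of weights, it is loaded on top of the base model. A minimal loading sketch follows; the repository id is an assumption inferred from the model name on this card, and the prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Assumed repository id, inferred from the model name on this card.
adapter_id = "rbelanec/train_codealpacapy_1755694512"

# Loads meta-llama/Meta-Llama-3-8B-Instruct and applies the prefix-tuning adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```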

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
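For reference, here is a sketch of how these settings map onto a PEFT prefix-tuning setup with the Hugging Face `Trainer` stack. The prefix length (`num_virtual_tokens`) is not reported on this card, so the value below is a placeholder assumption, and dataset plumbing is omitted:

```python
from peft import PrefixTuningConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Prefix-tuning config; num_virtual_tokens is NOT reported on this card,
# so 10 is only a placeholder.
peft_config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=10)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct"
)
model = get_peft_model(base_model, peft_config)

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="train_codealpacapy_1755694512",
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",  # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```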

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.4217        | 0.5001 | 1908  | 0.5447          | 508608            |
| 0.5661        | 1.0003 | 3816  | 0.6061          | 1023840           |
| 0.4662        | 1.5004 | 5724  | 0.5115          | 1534400           |
| 0.5013        | 2.0005 | 7632  | 0.4961          | 2047464           |
| 0.6815        | 2.5007 | 9540  | 0.5051          | 2563624           |
| 0.3892        | 3.0008 | 11448 | 0.4983          | 3068424           |
| 0.3807        | 3.5009 | 13356 | 0.5102          | 3579432           |
| 0.5023        | 4.0010 | 15264 | 0.5103          | 4090736           |
| 0.3171        | 4.5012 | 17172 | 0.5280          | 4604416           |
| 0.3007        | 5.0013 | 19080 | 0.5387          | 5114800           |
| 0.3973        | 5.5014 | 20988 | 0.5853          | 5619904           |
| 0.2143        | 6.0016 | 22896 | 0.5941          | 6137320           |
| 0.2133        | 6.5017 | 24804 | 0.6379          | 6637864           |
| 0.1318        | 7.0018 | 26712 | 0.6575          | 7159688           |
| 0.2348        | 7.5020 | 28620 | 0.7354          | 7672088           |
| 0.3334        | 8.0021 | 30528 | 0.7432          | 8185712           |
| 0.1042        | 8.5022 | 32436 | 0.8241          | 8700416           |
| 0.1284        | 9.0024 | 34344 | 0.8331          | 9210648           |
| 0.1177        | 9.5025 | 36252 | 0.8773          | 9716920           |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1