---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prefix-tuning
  - generated_from_trainer
model-index:
  - name: train_conala_1754652182
    results: []
---

train_conala_1754652182

This model is a prefix-tuning adapter for meta-llama/Meta-Llama-3-8B-Instruct, trained with LLaMA-Factory on the CoNaLa dataset. It achieves the following results on the evaluation set:

  • Loss: 6.1469
  • Num Input Tokens Seen: 1524216
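A minimal loading sketch (the repo id rbelanec/train_conala_1754652182 is inferred from this card and may differ; adjust it to the actual adapter location):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_conala_1754652182"  # assumption: inferred repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the prefix-tuning adapter

# CoNaLa-style prompt: natural-language intent -> Python snippet
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Reverse a list in Python."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```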

Model description

This is a PEFT prefix-tuning adapter for Meta-Llama-3-8B-Instruct: a small set of trainable prefix vectors is prepended to the attention keys and values of each layer, while the base model weights stay frozen.
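For illustration, a minimal PEFT setup of this kind of run (num_virtual_tokens is an assumption; the value used here is not recorded in this card):

```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
peft_config = PrefixTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,  # assumption: not recorded in this card
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```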

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained and evaluated on CoNaLa, a dataset of natural-language programming intents paired with Python code snippets mined from Stack Overflow.
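To inspect the data, a hedged sketch assuming the neulab/conala mirror on the Hugging Face Hub (the exact copy and preprocessing used in this run are not recorded here):

```python
from datasets import load_dataset

# Assumption: "neulab/conala" is the dataset actually used; the field names
# ("intent", "snippet") follow that release.
ds = load_dataset("neulab/conala")
example = ds["train"][0]
print(example["intent"], "->", example["snippet"])
```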

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
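For reference, a rough transformers TrainingArguments equivalent of the settings above (illustrative only; the run itself was launched through LLaMA-Factory, and launcher defaults such as gradient accumulation may differ):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_conala_1754652182",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```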

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---------------|-------|------|-----------------|-------------------|
| 10.6497       | 0.5   | 268  | 10.7962         | 75936             |
| 8.9279        | 1.0   | 536  | 8.6633          | 152672            |
| 7.7268        | 1.5   | 804  | 7.6969          | 229344            |
| 7.3138        | 2.0   | 1072 | 7.1510          | 305288            |
| 6.8308        | 2.5   | 1340 | 6.8473          | 382120            |
| 6.5213        | 3.0   | 1608 | 6.6602          | 457952            |
| 6.1458        | 3.5   | 1876 | 6.5309          | 534688            |
| 6.4748        | 4.0   | 2144 | 6.4322          | 610944            |
| 6.0796        | 4.5   | 2412 | 6.3570          | 687328            |
| 6.3133        | 5.0   | 2680 | 6.3077          | 762440            |
| 5.5232        | 5.5   | 2948 | 6.2614          | 839656            |
| 5.3684        | 6.0   | 3216 | 6.2249          | 914920            |
| 5.9655        | 6.5   | 3484 | 6.2010          | 992104            |
| 6.3153        | 7.0   | 3752 | 6.1782          | 1067520           |
| 6.4198        | 7.5   | 4020 | 6.1645          | 1142912           |
| 5.8816        | 8.0   | 4288 | 6.1579          | 1220200           |
| 5.8843        | 8.5   | 4556 | 6.1506          | 1295720           |
| 6.5707        | 9.0   | 4824 | 6.1477          | 1372560           |
| 5.8709        | 9.5   | 5092 | 6.1469          | 1447376           |
| 5.5584        | 10.0  | 5360 | 6.1489          | 1524216           |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1