---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prompt-tuning
  - generated_from_trainer
model-index:
  - name: train_cb_1757340191
    results: []
---

# train_cb_1757340191

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb (CommitmentBank) dataset. It achieves the following results on the evaluation set:

- Loss: 0.1360
- Num input tokens seen: 367864
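
Since this card ships a PEFT prompt-tuning adapter rather than full model weights, it is loaded on top of the base model. A minimal usage sketch, assuming the adapter is published under the repo id `rbelanec/train_cb_1757340191` (inferred from this card's name) and that your prompt roughly matches the training template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_1757340191"  # assumed repo id, inferred from this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attaches the trained prompt embeddings

# CommitmentBank is a three-way NLI task (entailment / contradiction / neutral);
# the exact prompt template used by LLaMA-Factory during training may differ.
prompt = "Premise: It was raining all night. Hypothesis: The ground is wet. Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```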

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the code sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
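
For reference, a minimal sketch (not the exact LLaMA-Factory invocation) of how these settings map onto the `peft` and `transformers` APIs; the number of virtual prompt tokens is an assumption, since the card does not report it:

```python
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # assumed value; not stated in this card
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = get_peft_model(model, peft_config)  # only the prompt embeddings are trainable

training_args = TrainingArguments(
    output_dir="train_cb_1757340191",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",  # betas=(0.9, 0.999) and eps=1e-8 are the adamw_torch defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```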

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.8637        | 0.5088 | 29   | 0.5433          | 20064             |
| 0.2282        | 1.0175 | 58   | 0.2346          | 37832             |
| 0.1669        | 1.5263 | 87   | 0.2458          | 57288             |
| 0.0698        | 2.0351 | 116  | 0.1708          | 74520             |
| 0.6367        | 2.5439 | 145  | 0.1745          | 93080             |
| 0.1197        | 3.0526 | 174  | 0.1617          | 111928            |
| 0.2465        | 3.5614 | 203  | 0.1476          | 131160            |
| 0.1638        | 4.0702 | 232  | 0.1464          | 150056            |
| 0.0107        | 4.5789 | 261  | 0.1360          | 167208            |
| 0.0856        | 5.0877 | 290  | 0.1542          | 186160            |
| 0.0062        | 5.5965 | 319  | 0.1584          | 206000            |
| 0.1789        | 6.1053 | 348  | 0.1645          | 224064            |
| 0.0373        | 6.6140 | 377  | 0.1678          | 243840            |
| 0.0026        | 7.1228 | 406  | 0.1604          | 261504            |
| 0.1333        | 7.6316 | 435  | 0.1662          | 280352            |
| 0.0041        | 8.1404 | 464  | 0.1668          | 299344            |
| 0.2603        | 8.6491 | 493  | 0.1665          | 318672            |
| 0.0191        | 9.1579 | 522  | 0.1650          | 337480            |
| 0.0748        | 9.6667 | 551  | 0.1643          | 356456            |
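
The headline evaluation loss of 0.1360 corresponds to the step 261 checkpoint (epoch ≈ 4.58); validation loss plateaus around 0.16 over the remaining epochs.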

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1