---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - p-tuning
  - generated_from_trainer
model-index:
  - name: train_cb_1757340266
    results: []
---

# train_cb_1757340266

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

- Loss: 1.9157
- Num Input Tokens Seen: 359824
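
For illustration, here is a minimal inference sketch. It assumes the adapter is published as `rbelanec/train_cb_1757340266` (inferred from this repository's name, not confirmed), that you have access to the gated Llama 3 base weights, and that `cb` refers to the CommitmentBank NLI task; adjust the prompt to match the template used during training.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model; the p-tuning adapter only adds a small
# prompt encoder on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Attach the trained adapter weights (repo id is an assumption).
model = PeftModel.from_pretrained(base_model, "rbelanec/train_cb_1757340266")
model.eval()

# Hypothetical CB-style prompt: premise/hypothesis classification.
prompt = "Premise: It was raining all night. Hypothesis: The ground is wet. Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```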

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
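
Below is a minimal configuration sketch using PEFT and Transformers directly. The original run was produced with LLaMA-Factory, so the p-tuning specifics (number of virtual tokens, prompt-encoder size) are assumptions not reported in this card; only the `TrainingArguments` values come from the list above.

```python
from peft import PromptEncoderConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# P-tuning trains a small prompt encoder that emits virtual-token
# embeddings prepended to the input; the base model stays frozen.
peft_config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,       # assumption: not reported in this card
    encoder_hidden_size=1024,    # assumption: not reported in this card
)
model = get_peft_model(model, peft_config)

args = TrainingArguments(
    output_dir="train_cb_1757340266",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",         # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```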

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.5529        | 0.5088 | 29   | 0.2530          | 19872             |
| 0.425         | 1.0175 | 58   | 0.3134          | 36432             |
| 0.1547        | 1.5263 | 87   | 0.2663          | 53680             |
| 0.1254        | 2.0351 | 116  | 0.1659          | 72160             |
| 0.0558        | 2.5439 | 145  | 0.0878          | 91904             |
| 0.354         | 3.0526 | 174  | 0.0978          | 108856            |
| 0.3631        | 3.5614 | 203  | 0.0429          | 128056            |
| 0.0179        | 4.0702 | 232  | 0.0784          | 146952            |
| 0.097         | 4.5789 | 261  | 0.0163          | 165128            |
| 0.0996        | 5.0877 | 290  | 0.0240          | 183224            |
| 0.0087        | 5.5965 | 319  | 0.0313          | 202424            |
| 0.0045        | 6.1053 | 348  | 0.0421          | 220000            |
| 0.0108        | 6.6140 | 377  | 0.0190          | 238272            |
| 0.0252        | 7.1228 | 406  | 0.0611          | 255984            |
| 0.0008        | 7.6316 | 435  | 0.0516          | 275536            |
| 0.0008        | 8.1404 | 464  | 0.0267          | 293296            |
| 0.01          | 8.6491 | 493  | 0.0293          | 312304            |
| 0.0004        | 9.1579 | 522  | 0.0468          | 329216            |
| 0.0003        | 9.6667 | 551  | 0.0471          | 346944            |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1