---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prompt-tuning
  - generated_from_trainer
model-index:
  - name: train_cb_1757340265
    results: []
---

# train_cb_1757340265

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:

- Loss: 0.1202
- Num input tokens seen: 359824
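
Since this is a PEFT prompt-tuning adapter rather than a full model, it must be loaded on top of the base model. Below is a minimal inference sketch; the Hub repo ID `rbelanec/train_cb_1757340265` and the NLI prompt wording are assumptions, not details confirmed by this card:

```python
# Minimal inference sketch for this prompt-tuning adapter.
# Assumptions: the adapter is published at "rbelanec/train_cb_1757340265",
# and CB examples were formatted as premise/hypothesis NLI prompts.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_1757340265"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the learned soft prompt
model.eval()

# CB (CommitmentBank) is a three-way NLI task: entailment / contradiction / neutral.
messages = [{
    "role": "user",
    "content": (
        "Premise: The meeting was moved to Friday.\n"
        "Hypothesis: The meeting takes place on Friday.\n"
        "Does the premise entail the hypothesis? "
        "Answer entailment, contradiction, or neutral."
    ),
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids=input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```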

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the sketch after this list shows how they map onto Transformers' `TrainingArguments`):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
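
For orientation, here is a rough sketch of how the settings above map onto Transformers' `TrainingArguments`, together with a placeholder PEFT prompt-tuning config. The run itself was launched through LLaMA-Factory, and `num_virtual_tokens` is an assumption (the card does not report it):

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# The actual run used LLaMA-Factory; this is not the original launch config.
from peft import PromptTuningConfig, TaskType
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_cb_1757340265",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # assumption: not reported in this card
)
```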

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.7269        | 0.5088 | 29   | 0.7703          | 19872             |
| 0.2065        | 1.0175 | 58   | 0.1823          | 36432             |
| 0.1101        | 1.5263 | 87   | 0.1472          | 53680             |
| 0.1284        | 2.0351 | 116  | 0.1234          | 72160             |
| 0.0319        | 2.5439 | 145  | 0.1202          | 91904             |
| 0.4179        | 3.0526 | 174  | 0.1394          | 108856            |
| 0.21          | 3.5614 | 203  | 0.1364          | 128056            |
| 0.0244        | 4.0702 | 232  | 0.1916          | 146952            |
| 0.1028        | 4.5789 | 261  | 0.1853          | 165128            |
| 0.0302        | 5.0877 | 290  | 0.2104          | 183224            |
| 0.0024        | 5.5965 | 319  | 0.1862          | 202424            |
| 0.0953        | 6.1053 | 348  | 0.2177          | 220000            |
| 0.0689        | 6.6140 | 377  | 0.2197          | 238272            |
| 0.2106        | 7.1228 | 406  | 0.2411          | 255984            |
| 0.006         | 7.6316 | 435  | 0.2398          | 275536            |
| 0.0119        | 8.1404 | 464  | 0.2447          | 293296            |
| 0.0882        | 8.6491 | 493  | 0.2382          | 312304            |
| 0.0044        | 9.1579 | 522  | 0.2459          | 329216            |
| 0.0294        | 9.6667 | 551  | 0.2478          | 346944            |

The reported evaluation loss (0.1202) corresponds to the minimum validation loss, reached at step 145 (epoch 2.54); validation loss rises after that point while training loss continues to fall, suggesting overfitting in later epochs.

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1