---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - lntuning
  - generated_from_trainer
model-index:
  - name: train_cb_1757340268
    results: []
---

# train_cb_1757340268

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset. It achieves the following results on the evaluation set:

- Loss: 0.1502
- Num Input Tokens Seen: 359824
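Because this repository contains a PEFT adapter rather than full model weights, it is loaded on top of the base model at inference time. Below is a minimal loading sketch; the repo id `rbelanec/train_cb_1757340268` is inferred from this card's title, the prompt is illustrative only, and the pinned versions come from the Framework versions section.

```python
# Minimal inference sketch (assumption: the adapter repo id below is inferred
# from this card's title and may differ). Requires access to the gated
# meta-llama base model.
# pip install peft==0.15.2 transformers==4.51.3 torch

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "rbelanec/train_cb_1757340268"  # assumed repo id

# AutoPeftModelForCausalLM reads the adapter config, downloads the base model
# (meta-llama/Meta-Llama-3-8B-Instruct), and attaches the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Illustrative prompt only; the exact prompt template used in training
# is not documented in this card.
messages = [{"role": "user", "content": "Premise: ... Hypothesis: ... Does the premise entail the hypothesis?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```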

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
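For reference, the sketch below shows how these hyperparameters would map onto a `transformers.TrainingArguments` object. The actual run was driven by LLaMA-Factory, so the `output_dir` and the assumption that the batch sizes are per-device are illustrative rather than confirmed.

```python
# Sketch only: maps the hyperparameters above onto transformers.TrainingArguments.
# The actual training run used LLaMA-Factory; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_cb_1757340268",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,      # assumes train_batch_size is per device
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",                # betas=(0.9, 0.999), eps=1e-8 are the adamw_torch defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```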

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.8304        | 0.5088 | 29   | 0.8175          | 19872             |
| 0.2874        | 1.0175 | 58   | 0.3097          | 36432             |
| 0.117         | 1.5263 | 87   | 0.2008          | 53680             |
| 0.158         | 2.0351 | 116  | 0.1816          | 72160             |
| 0.0625        | 2.5439 | 145  | 0.1618          | 91904             |
| 0.362         | 3.0526 | 174  | 0.1618          | 108856            |
| 0.2499        | 3.5614 | 203  | 0.1502          | 128056            |
| 0.0416        | 4.0702 | 232  | 0.1588          | 146952            |
| 0.0798        | 4.5789 | 261  | 0.1717          | 165128            |
| 0.0694        | 5.0877 | 290  | 0.1825          | 183224            |
| 0.009         | 5.5965 | 319  | 0.1751          | 202424            |
| 0.0798        | 6.1053 | 348  | 0.1801          | 220000            |
| 0.1092        | 6.6140 | 377  | 0.1765          | 238272            |
| 0.0968        | 7.1228 | 406  | 0.1833          | 255984            |
| 0.0135        | 7.6316 | 435  | 0.1948          | 275536            |
| 0.0669        | 8.1404 | 464  | 0.1933          | 293296            |
| 0.0877        | 8.6491 | 493  | 0.1893          | 312304            |
| 0.0715        | 9.1579 | 522  | 0.1936          | 329216            |
| 0.0497        | 9.6667 | 551  | 0.1898          | 346944            |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1