---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - p-tuning
  - generated_from_trainer
model-index:
  - name: train_cb_1757340168
    results: []
---

# train_cb_1757340168

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset (a usage sketch follows the results below). It achieves the following results on the evaluation set:

- Loss: 1.5804
- Num Input Tokens Seen: 361992
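
The adapter can be loaded together with its base model via PEFT. A minimal, hedged sketch; the repo id `rbelanec/train_cb_1757340168` is inferred from this card rather than confirmed by it, and access to the gated Llama 3 weights is required:

```python
# Hedged usage sketch: load the p-tuning adapter on top of the base model.
# The repo id "rbelanec/train_cb_1757340168" is inferred from this card, not confirmed.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_cb_1757340168")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# CB is an entailment-style task, so a premise/hypothesis prompt is a reasonable guess.
prompt = "Premise: It rained all night. Hypothesis: The ground is wet. Entailment, contradiction, or neutral?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```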

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure
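
Per the tags, this adapter was trained with p-tuning. A minimal sketch of how such an adapter is typically configured with PEFT; the `num_virtual_tokens` value is an assumption, since the card does not record it:

```python
# Hedged sketch of a p-tuning setup with PEFT; not the exact training script.
from transformers import AutoModelForCausalLM
from peft import PromptEncoderConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
peft_config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,  # assumption: the actual value is not recorded in this card
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the prompt-encoder weights are trainable
```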

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
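
The same settings expressed as Hugging Face `TrainingArguments` (which LLaMA-Factory builds on); `output_dir` is a placeholder, not taken from the original run:

```python
# Hedged sketch: the listed hyperparameters mapped onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_cb_1757340168",  # placeholder, not from the original run
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```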

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.2601        | 0.5088 | 29   | 0.2370          | 18048             |
| 0.3128        | 1.0175 | 58   | 0.4424          | 36928             |
| 1.7629        | 1.5263 | 87   | 0.5561          | 54176             |
| 0.3378        | 2.0351 | 116  | 0.2020          | 73136             |
| 0.3303        | 2.5439 | 145  | 0.1614          | 91216             |
| 0.0453        | 3.0526 | 174  | 0.2379          | 110696            |
| 0.116         | 3.5614 | 203  | 0.1001          | 129448            |
| 0.1798        | 4.0702 | 232  | 0.0368          | 147176            |
| 0.2801        | 4.5789 | 261  | 0.1194          | 164424            |
| 0.0358        | 5.0877 | 290  | 0.0774          | 183416            |
| 0.2572        | 5.5965 | 319  | 0.0889          | 203256            |
| 0.0011        | 6.1053 | 348  | 0.1109          | 220912            |
| 0.0696        | 6.6140 | 377  | 0.0931          | 240336            |
| 0.0251        | 7.1228 | 406  | 0.0881          | 257848            |
| 0.006         | 7.6316 | 435  | 0.0805          | 276824            |
| 0.0004        | 8.1404 | 464  | 0.1005          | 294504            |
| 0.0008        | 8.6491 | 493  | 0.0857          | 313576            |
| 0.0015        | 9.1579 | 522  | 0.0954          | 332256            |
| 0.0009        | 9.6667 | 551  | 0.0996          | 350336            |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1