---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
  - llama-factory
  - transformers
pipeline_tag: text-generation
model-index:
  - name: train_cb_789_1760637870
    results: []
---

# train_cb_789_1760637870

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:

- Loss: 1.0068
- Num input tokens seen: 711112
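Below is a minimal inference sketch using PEFT and Transformers. The adapter repository id `rbelanec/train_cb_789_1760637870` and the example prompt are assumptions inferred from the model name, not confirmed details; substitute the actual adapter path and prompt format as needed.

```python
# Minimal inference sketch. The adapter repo id below is an assumption
# based on the model name; replace it with the actual adapter location.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_789_1760637870"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

# Example NLI-style prompt; the actual prompt template used during
# llama-factory training may differ.
messages = [{"role": "user", "content": "Premise: It is raining. Hypothesis: The ground is wet. Entailment, contradiction, or neutral?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```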

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
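For reference, here is a hedged sketch of how these hyperparameters would map onto Hugging Face `TrainingArguments`. Training was actually run through llama-factory, so the real invocation differed; the output directory and any setting not listed above are placeholders.

```python
# Sketch only: the reported hyperparameters expressed as TrainingArguments.
# Settings not listed in the card (output_dir, logging, saving, ...) are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_cb_789_1760637870",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```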

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.998         | 1.0   | 57   | 1.1530          | 35448             |
| 1.1776        | 2.0   | 114  | 1.1269          | 70496             |
| 1.1544        | 3.0   | 171  | 1.1184          | 106416            |
| 1.2161        | 4.0   | 228  | 1.0682          | 142480            |
| 0.8968        | 5.0   | 285  | 1.0623          | 177224            |
| 1.0029        | 6.0   | 342  | 1.0563          | 212000            |
| 0.8655        | 7.0   | 399  | 1.0370          | 248272            |
| 0.9519        | 8.0   | 456  | 1.0199          | 284248            |
| 1.0435        | 9.0   | 513  | 1.0256          | 319488            |
| 0.9225        | 10.0  | 570  | 1.0201          | 354472            |
| 0.9044        | 11.0  | 627  | 1.0167          | 389408            |
| 0.9074        | 12.0  | 684  | 1.0151          | 425328            |
| 0.8735        | 13.0  | 741  | 1.0169          | 461216            |
| 0.9917        | 14.0  | 798  | 1.0128          | 496704            |
| 0.8208        | 15.0  | 855  | 1.0204          | 532504            |
| 0.8281        | 16.0  | 912  | 1.0124          | 567952            |
| 1.0021        | 17.0  | 969  | 1.0095          | 603760            |
| 1.0184        | 18.0  | 1026 | 1.0096          | 639784            |
| 0.7952        | 19.0  | 1083 | 1.0226          | 675800            |
| 0.8799        | 20.0  | 1140 | 1.0068          | 711112            |

### Framework versions

- PEFT 0.17.1
- Transformers 4.51.3
- PyTorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4