---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
  - llama-factory
  - transformers
pipeline_tag: text-generation
model-index:
  - name: train_cb_456_1760637753
    results: []
---

# train_cb_456_1760637753

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset. It achieves the following results on the evaluation set:

- Loss: 0.2748
- Num Input Tokens Seen: 648352
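
Because this repository contains a PEFT adapter rather than full model weights, it has to be loaded on top of the base model. Below is a minimal sketch, assuming the adapter is published as `rbelanec/train_cb_456_1760637753` (inferred from the card; adjust the repo id if it lives elsewhere) and using an illustrative prompt:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_456_1760637753"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter

# Illustrative prompt; the exact prompt format used for the cb task is not
# documented in this card.
messages = [{"role": "user", "content": "Does the premise entail the hypothesis?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```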

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (restated as a `TrainingArguments` sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 456
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
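
For reference, here is a hedged sketch of how these settings map onto Hugging Face `TrainingArguments`. The `output_dir` is illustrative, and any llama-factory-specific options (adapter rank, target modules, etc.) are omitted because the card does not record them:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="train_cb_456_1760637753",  # illustrative path, not from the run
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```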

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.1956        | 2.0   | 100  | 0.2038          | 65088             |
| 0.1539        | 4.0   | 200  | 0.1807          | 128896            |
| 0.1384        | 6.0   | 300  | 0.1621          | 193888            |
| 0.2604        | 8.0   | 400  | 0.1874          | 258816            |
| 0.0017        | 10.0  | 500  | 0.2035          | 323328            |
| 0.0008        | 12.0  | 600  | 0.2536          | 388448            |
| 0.0002        | 14.0  | 700  | 0.2883          | 453984            |
| 0.0001        | 16.0  | 800  | 0.2698          | 519200            |
| 0.0001        | 18.0  | 900  | 0.2687          | 583712            |
| 0.0001        | 20.0  | 1000 | 0.2748          | 648352            |

### Framework versions

- PEFT 0.17.1
- Transformers 4.51.3
- PyTorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4