---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
  - llama-factory
  - transformers
pipeline_tag: text-generation
model-index:
  - name: train_cb_123_1760637636
    results: []
---

# train_cb_123_1760637636

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5569
- Num input tokens seen: 669472
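
You can load the adapter with PEFT for inference. Below is a minimal sketch, assuming the adapter is published under the Hub id `rbelanec/train_cb_123_1760637636` (inferred from this card's name; adjust to the actual repo) and that you have access to the gated Llama 3 base model:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "rbelanec/train_cb_123_1760637636"  # assumed Hub repo id

# AutoPeftModelForCausalLM reads the adapter config, loads the base model it
# points at (meta-llama/Meta-Llama-3-8B-Instruct), and attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Illustrative prompt only; the exact prompt template used for training
# is not documented in this card.
prompt = "Premise: ... Hypothesis: ... Does the premise entail the hypothesis?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```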

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
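
For reference, here is how these values map onto `transformers.TrainingArguments`. This run was produced with LLaMA-Factory, so the actual launcher config is not this code; the sketch below is an equivalent expression of the reported values, and the `output_dir` name is illustrative:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_cb_123_1760637636",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",          # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```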

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.4028        | 2.0   | 100  | 0.2820          | 67296             |
| 0.3518        | 4.0   | 200  | 0.2934          | 133504            |
| 0.2005        | 6.0   | 300  | 0.3301          | 200480            |
| 0.25          | 8.0   | 400  | 0.2831          | 267072            |
| 0.1551        | 10.0  | 500  | 0.3385          | 334784            |
| 0.1724        | 12.0  | 600  | 0.4800          | 402080            |
| 0.0008        | 14.0  | 700  | 0.5080          | 467968            |
| 0.0039        | 16.0  | 800  | 0.5616          | 534752            |
| 0.0008        | 18.0  | 900  | 0.5532          | 601952            |
| 0.0008        | 20.0  | 1000 | 0.5569          | 669472            |

### Framework versions

- PEFT 0.17.1
- Transformers 4.51.3
- Pytorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4