---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
  - llama-factory
  - transformers
pipeline_tag: text-generation
model-index:
  - name: train_copa_789_1760637872
    results: []
---

# train_copa_789_1760637872

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the copa dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.6325
- Num Input Tokens Seen: 501888
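
As a usage sketch, the snippet below loads the base model and attaches this adapter with peft. The repository id `rbelanec/train_copa_789_1760637872` is an assumption inferred from the card name, and the COPA-style prompt is illustrative; the exact template used during training is not documented on this card.

```python
# Minimal inference sketch (assumed adapter id; adjust to the actual repo or a local path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_copa_789_1760637872"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# COPA-style query: pick the more plausible alternative.
messages = [{"role": "user", "content": (
    "Premise: The man broke his toe. What was the cause?\n"
    "Choice 1: He got a hole in his sock.\n"
    "Choice 2: He dropped a hammer on his foot.\n"
    "Answer with 1 or 2."
)}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))
```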

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them to a Trainer configuration follows the list):

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
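
For reference, these settings map onto Hugging Face `TrainingArguments` roughly as below. This is a sketch, not the exact configuration: the run was driven by LLaMA-Factory, and the LoRA/adapter settings are not recorded on this card.

```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_copa_789_1760637872",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```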

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2428        | 2.0   | 160  | 0.2389          | 50208             |
| 0.2334        | 4.0   | 320  | 0.2317          | 100384            |
| 0.2085        | 6.0   | 480  | 0.2517          | 150400            |
| 0.2214        | 8.0   | 640  | 0.2480          | 200640            |
| 0.2534        | 10.0  | 800  | 0.2512          | 250848            |
| 0.1347        | 12.0  | 960  | 0.3083          | 301088            |
| 0.0685        | 14.0  | 1120 | 0.4576          | 351424            |
| 0.0203        | 16.0  | 1280 | 0.5270          | 401504            |
| 0.0132        | 18.0  | 1440 | 0.6124          | 451648            |
| 0.0032        | 20.0  | 1600 | 0.6325          | 501888            |
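
Validation loss bottoms out at 0.2317 around epoch 4 and rises steadily afterward while training loss keeps falling, a pattern consistent with overfitting, so an earlier checkpoint may generalize better than the final one. Re-plotting the table makes the divergence visible; a minimal sketch, assuming matplotlib is installed:

```python
# Re-plot the logged losses from the table above.
import matplotlib.pyplot as plt

epochs     = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
train_loss = [0.2428, 0.2334, 0.2085, 0.2214, 0.2534,
              0.1347, 0.0685, 0.0203, 0.0132, 0.0032]
val_loss   = [0.2389, 0.2317, 0.2517, 0.2480, 0.2512,
              0.3083, 0.4576, 0.5270, 0.6124, 0.6325]

plt.plot(epochs, train_loss, marker="o", label="training loss")
plt.plot(epochs, val_loss, marker="o", label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.tight_layout()
plt.savefig("copa_loss_curves.png")
```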

### Framework versions

- PEFT 0.17.1
- Transformers 4.51.3
- PyTorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
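
To sanity-check a local environment against these pins, a trivial snippet:

```python
# Print installed versions to compare with the pins listed above.
import peft, transformers, torch, datasets, tokenizers

print("peft:", peft.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("datasets:", datasets.__version__)
print("tokenizers:", tokenizers.__version__)
```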