---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prefix-tuning
  - generated_from_trainer
model-index:
  - name: train_copa_1754651789
    results: []
---

# train_copa_1754651789

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the COPA (Choice of Plausible Alternatives) dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2669
- Num Input Tokens Seen: 281856
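
This is a prefix-tuning adapter, so it must be loaded on top of the base model. A minimal usage sketch with PEFT follows; the repo id is assumed from the run name and should be replaced with the actual adapter path, and the COPA prompt shown is illustrative rather than the exact template used in training.

```python
# Minimal sketch: load the prefix-tuning adapter on top of the base model.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Assumed repo id -- replace with the actual adapter location.
adapter_id = "rbelanec/train_copa_1754651789"

# AutoPeftModelForCausalLM reads the adapter config, fetches the base model
# (meta-llama/Meta-Llama-3-8B-Instruct), and attaches the trained prefix.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# COPA is a two-choice causal-reasoning task; this prompt format is
# illustrative and may differ from the template used during training.
prompt = (
    "Premise: The man broke his toe. What was the cause?\n"
    "Choice 1: He got a hole in his sock.\n"
    "Choice 2: He dropped a hammer on his foot.\n"
    "Answer with the number of the more plausible choice."
)
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids=input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```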

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
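
The tags indicate training was run with LLaMA-Factory. For readers who want an equivalent setup with PEFT and the Hugging Face `Trainer`, a rough sketch follows; the prefix length (`num_virtual_tokens`) is not recorded in this card and is an assumption, and dataset preparation is omitted.

```python
# Rough PEFT/Trainer equivalent of the hyperparameters listed above.
# Training was actually run via llama-factory; this sketch is illustrative.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import PrefixTuningConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# num_virtual_tokens (the prefix length) is an assumption; it is not listed here.
peft_config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=10)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable

args = TrainingArguments(
    output_dir="train_copa_1754651789",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",        # AdamW with default betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    eval_strategy="steps",
    eval_steps=45,              # matches the evaluation cadence in the results table
)
# A Trainer would then be constructed with `args`, the PEFT model, and the
# tokenized COPA splits (omitted here).
```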

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 12.1391       | 0.5   | 45   | 11.9483         | 14016             |
| 8.1827        | 1.0   | 90   | 7.8277          | 28096             |
| 4.4968        | 1.5   | 135  | 4.2820          | 42144             |
| 2.6252        | 2.0   | 180  | 2.3994          | 56128             |
| 1.0636        | 2.5   | 225  | 0.9134          | 70272             |
| 0.42          | 3.0   | 270  | 0.4435          | 84352             |
| 0.3501        | 3.5   | 315  | 0.3621          | 98464             |
| 0.401         | 4.0   | 360  | 0.3243          | 112576            |
| 0.3488        | 4.5   | 405  | 0.3295          | 126624            |
| 0.3229        | 5.0   | 450  | 0.2785          | 140832            |
| 0.257         | 5.5   | 495  | 0.2820          | 154976            |
| 0.245         | 6.0   | 540  | 0.2875          | 169056            |
| 0.2576        | 6.5   | 585  | 0.2669          | 183200            |
| 0.2504        | 7.0   | 630  | 0.2740          | 197344            |
| 0.267         | 7.5   | 675  | 0.2765          | 211392            |
| 0.275         | 8.0   | 720  | 0.2757          | 225536            |
| 0.2552        | 8.5   | 765  | 0.2773          | 239680            |
| 0.2574        | 9.0   | 810  | 0.2751          | 253696            |
| 0.2546        | 9.5   | 855  | 0.2736          | 267840            |
| 0.2711        | 10.0  | 900  | 0.2693          | 281856            |
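
Note that the reported evaluation loss (0.2669) matches the epoch 6.5 checkpoint rather than the final step, which suggests the best checkpoint by validation loss was retained at the end of training.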

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1