---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prompt-tuning
  - generated_from_trainer
model-index:
  - name: train_cola_1757340185
    results: []
---

train_cola_1757340185

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the CoLA (Corpus of Linguistic Acceptability) dataset, trained with PEFT prompt tuning via LLaMA-Factory. It achieves the following results on the evaluation set:

  • Loss: 0.1891
  • Num Input Tokens Seen: 3669168
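Because this repository contains a PEFT prompt-tuning adapter rather than full model weights, it is loaded on top of the base model. Below is a minimal sketch; the adapter repository id is inferred from the model name and the prompt format is an assumption (the exact LLaMA-Factory template used in training is not recorded in this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cola_1757340185"  # assumed repo id; adjust if it differs

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the prompt-tuning adapter to the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# CoLA is a binary acceptability task; this prompt wording is illustrative,
# not the template used during training.
prompt = (
    "Is the following sentence grammatically acceptable? Answer yes or no.\n"
    "Sentence: The boys was playing outside."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```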

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
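For reference, here is a hedged sketch of how the hyperparameters above map onto a PEFT prompt-tuning setup with the Hugging Face Trainer. The number of virtual tokens is not recorded in this card and is a placeholder assumption:

```python
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Prompt-tuning config; num_virtual_tokens is an assumption, not listed above.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # placeholder: actual value not recorded in this card
)

# Training arguments mirroring the hyperparameter list above.
args = TrainingArguments(
    output_dir="train_cola_1757340185",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = get_peft_model(model, peft_config)  # only the prompt embeddings are trainable
```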

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1682        | 0.5   | 962   | 0.2292          | 184192            |
| 0.2053        | 1.0   | 1924  | 0.2663          | 367320            |
| 0.3625        | 1.5   | 2886  | 0.2419          | 550840            |
| 0.3409        | 2.0   | 3848  | 0.1891          | 734600            |
| 0.242         | 2.5   | 4810  | 0.2103          | 918600            |
| 0.4443        | 3.0   | 5772  | 0.2274          | 1101216           |
| 0.2546        | 3.5   | 6734  | 0.1962          | 1284288           |
| 0.3322        | 4.0   | 7696  | 0.2311          | 1468552           |
| 0.1172        | 4.5   | 8658  | 0.2418          | 1651528           |
| 0.0905        | 5.0   | 9620  | 0.2520          | 1834816           |
| 0.2706        | 5.5   | 10582 | 0.2517          | 2018016           |
| 0.1555        | 6.0   | 11544 | 0.2436          | 2201584           |
| 0.2961        | 6.5   | 12506 | 0.2524          | 2385200           |
| 0.0537        | 7.0   | 13468 | 0.2558          | 2568288           |
| 0.2939        | 7.5   | 14430 | 0.2553          | 2751584           |
| 0.1063        | 8.0   | 15392 | 0.2559          | 2935056           |
| 0.1724        | 8.5   | 16354 | 0.2551          | 3118000           |
| 0.0641        | 9.0   | 17316 | 0.2560          | 3301760           |
| 0.05          | 9.5   | 18278 | 0.2536          | 3485344           |
| 0.4162        | 10.0  | 19240 | 0.2559          | 3669168           |

The reported evaluation loss (0.1891) corresponds to the epoch 2.0 checkpoint (step 3848), the lowest validation loss in the run.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
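A small sketch for checking a local environment against the versions listed above before loading the adapter (the expected version strings are taken from this card):

```python
import datasets
import peft
import tokenizers
import torch
import transformers

# Versions recorded in the "Framework versions" list above.
expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.8.0+cu128",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
for name, module in [("peft", peft), ("transformers", transformers),
                     ("torch", torch), ("datasets", datasets),
                     ("tokenizers", tokenizers)]:
    print(f"{name}: installed {module.__version__}, card expects {expected[name]}")
```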