---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_cola_1757340236
  results: []
---

# train_cola_1757340236

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:

- Loss: 0.1499
- Num input tokens seen: 3663512
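
Since this is a PEFT LoRA adapter rather than a standalone checkpoint, it is loaded on top of the base model. A minimal loading sketch, assuming the adapter is published as `rbelanec/train_cola_1757340236` (a repo id inferred from this card, not confirmed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cola_1757340236"  # assumed repo id, inferred from the card

# Load the frozen base model first, then attach the LoRA adapter weights.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```

Note that the prompt template used for the CoLA task is not recorded here, so inference prompts would need to match the LLaMA-Factory template used during training.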

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
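
If "cola" refers to the GLUE CoLA task (Corpus of Linguistic Acceptability), the underlying data can be inspected with `datasets` as below; the exact prompt formatting LLaMA-Factory applied for this run is not recorded on the card.

```python
from datasets import load_dataset

# GLUE CoLA: English sentences labeled for grammatical acceptability
# (label 1 = acceptable, 0 = unacceptable). Assumes "cola" on this card
# means the GLUE task of that name.
cola = load_dataset("glue", "cola")
print(cola["train"][0])  # e.g. {'sentence': "...", 'label': 1, 'idx': 0}
```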

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
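
For orientation, these settings map onto a `transformers` `TrainingArguments` object roughly as follows. This is an illustrative reconstruction, not the actual LLaMA-Factory config for this run, and the `output_dir` is hypothetical:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters reported above.
training_args = TrainingArguments(
    output_dir="train_cola_1757340236",  # hypothetical path
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```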

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.3264        | 0.5   | 962   | 0.2922          | 182656            |
| 0.2569        | 1.0   | 1924  | 0.1577          | 365728            |
| 0.173         | 1.5   | 2886  | 0.2208          | 548992            |
| 0.1542        | 2.0   | 3848  | 0.1499          | 731984            |
| 0.0166        | 2.5   | 4810  | 0.2557          | 915792            |
| 0.2348        | 3.0   | 5772  | 0.2018          | 1098920           |
| 0.0149        | 3.5   | 6734  | 0.2522          | 1281640           |
| 0.004         | 4.0   | 7696  | 0.2412          | 1465464           |
| 0.0005        | 4.5   | 8658  | 0.2826          | 1649720           |
| 0.1161        | 5.0   | 9620  | 0.3346          | 1831920           |
| 0.0009        | 5.5   | 10582 | 0.2704          | 2014928           |
| 0.0008        | 6.0   | 11544 | 0.3713          | 2198176           |
| 0.1402        | 6.5   | 12506 | 0.3939          | 2381440           |
| 0.0002        | 7.0   | 13468 | 0.3080          | 2564952           |
| 0.0           | 7.5   | 14430 | 0.4320          | 2748568           |
| 0.0004        | 8.0   | 15392 | 0.4629          | 2931096           |
| 0.0           | 8.5   | 16354 | 0.4520          | 3113624           |
| 0.0           | 9.0   | 17316 | 0.4691          | 3296808           |
| 0.0           | 9.5   | 18278 | 0.4912          | 3480168           |
| 0.0071        | 10.0  | 19240 | 0.4918          | 3663512           |

The reported evaluation loss of 0.1499 matches the epoch 2.0 checkpoint, the lowest validation loss of the run, which suggests the best checkpoint was retained; validation loss climbs steadily after that point, consistent with overfitting as training continues.

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1