---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_cola_1757340161
  results: []
---

# train_cola_1757340161

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset (CoLA, the Corpus of Linguistic Acceptability).
It achieves the following results on the evaluation set:

- Loss: 0.1974
- Num input tokens seen: 3668584
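
Since this repository contains a PEFT prompt-tuning adapter rather than full model weights, it loads on top of the base model. Below is a minimal usage sketch; the repo id `rbelanec/train_cola_1757340161` is inferred from this card (not confirmed), the prompt template is hypothetical, and access to the gated Llama 3 base model is assumed.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: repo id inferred from this card, not confirmed.
adapter_id = "rbelanec/train_cola_1757340161"

# Loads the base model named in the adapter config, then applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical CoLA-style prompt; the exact template used in training is not
# recorded on this card.
prompt = "Is the following sentence grammatically acceptable? The cat sat on the mat."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```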

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
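
The card's llama-factory tag suggests training went through LLaMA-Factory; the sketch below instead renders the same values in plain PEFT/Transformers. `num_virtual_tokens`, the output directory, and all wiring beyond the listed values are assumptions not recorded on this card.

```python
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Prompt tuning learns a small set of virtual prompt embeddings while the
# base model stays frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # assumption: not stated on this card
)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = get_peft_model(base, peft_config)

training_args = TrainingArguments(
    output_dir="train_cola_1757340161",  # assumption
    learning_rate=5e-05,                 # matches the card
    per_device_train_batch_size=4,       # train_batch_size: 4
    per_device_eval_batch_size=4,        # eval_batch_size: 4
    seed=42,                             # matches the card
    optim="adamw_torch",                 # default betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",          # matches the card
    warmup_ratio=0.1,                    # lr_scheduler_warmup_ratio: 0.1
    num_train_epochs=10.0,               # matches the card
)
```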

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2082        | 0.5   | 962   | 0.2248          | 183584            |
| 0.2228        | 1.0   | 1924  | 0.2257          | 366856            |
| 0.2581        | 1.5   | 2886  | 0.2271          | 550664            |
| 0.1824        | 2.0   | 3848  | 0.2215          | 734320            |
| 0.0456        | 2.5   | 4810  | 0.1974          | 918128            |
| 0.1908        | 3.0   | 5772  | 0.2151          | 1100800           |
| 0.1309        | 3.5   | 6734  | 0.2010          | 1284064           |
| 0.0898        | 4.0   | 7696  | 0.2117          | 1467824           |
| 0.0125        | 4.5   | 8658  | 0.2159          | 1650992           |
| 0.0637        | 5.0   | 9620  | 0.2153          | 1834632           |
| 0.1687        | 5.5   | 10582 | 0.2167          | 2018408           |
| 0.0009        | 6.0   | 11544 | 0.2551          | 2202264           |
| 0.0536        | 6.5   | 12506 | 0.2365          | 2386136           |
| 0.2994        | 7.0   | 13468 | 0.2262          | 2568880           |
| 0.4543        | 7.5   | 14430 | 0.2324          | 2751696           |
| 0.1854        | 8.0   | 15392 | 0.2370          | 2935520           |
| 0.1913        | 8.5   | 16354 | 0.2382          | 3119168           |
| 0.2666        | 9.0   | 17316 | 0.2380          | 3302192           |
| 0.0615        | 9.5   | 18278 | 0.2377          | 3485264           |
| 0.2186        | 10.0  | 19240 | 0.2372          | 3668584           |

Note that the evaluation loss reported above (0.1974) matches the epoch 2.5 checkpoint (step 4810), the best over the run, while the reported token count (3668584) is the total after 10 epochs; validation loss did not improve on the epoch 2.5 value in later epochs.

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1