---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_record_1753169792
  results: []
---

# train_record_1753169792

This model is a prompt-tuning (PEFT) adapter for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), trained on the record dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5004
- Num Input Tokens Seen: 464483424
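As a quick orientation, the snippet below is a minimal loading sketch for a PEFT prompt-tuning adapter like this one. The repo id `rbelanec/train_record_1753169792` is an assumption inferred from the card name, not confirmed by this card; adjust dtype and device placement to your hardware.

```python
# Minimal sketch: attach the prompt-tuning adapter to its base model.
# Assumption: the adapter repo id is inferred from the card name and may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_record_1753169792"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the learned soft prompt

inputs = tokenizer("Question: ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```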

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
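Per the tags, training was run through LLaMA-Factory; as a rough sketch, the hyperparameters above correspond approximately to the following `transformers.TrainingArguments`. The `output_dir` is a made-up placeholder, and any setting not listed above is left at its default.

```python
# Approximate reconstruction of the run configuration from the list above.
# Assumptions: output_dir is hypothetical; unlisted settings use defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_record_1753169792",  # hypothetical
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```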

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|
| 0.989         | 0.5   | 15621  | 1.1066          | 23227520          |
| 1.0018        | 1.0   | 31242  | 0.8499          | 46454112          |
| 0.941         | 1.5   | 46863  | 0.7834          | 69694624          |
| 0.5588        | 2.0   | 62484  | 0.7003          | 92908288          |
| 0.5479        | 2.5   | 78105  | 0.6578          | 116099296         |
| 0.4998        | 3.0   | 93726  | 0.6246          | 139351808         |
| 0.6389        | 3.5   | 109347 | 0.5987          | 162566976         |
| 0.5937        | 4.0   | 124968 | 0.5751          | 185790304         |
| 0.5114        | 4.5   | 140589 | 0.5538          | 208997696         |
| 0.5271        | 5.0   | 156210 | 0.5381          | 232243968         |
| 0.4891        | 5.5   | 171831 | 0.5266          | 255458112         |
| 0.4292        | 6.0   | 187452 | 0.5183          | 278686752         |
| 0.6003        | 6.5   | 203073 | 0.5125          | 301925344         |
| 0.5286        | 7.0   | 218694 | 0.5078          | 325137568         |
| 0.5457        | 7.5   | 234315 | 0.5048          | 348361920         |
| 0.5228        | 8.0   | 249936 | 0.5027          | 371592704         |
| 0.4472        | 8.5   | 265557 | 0.5015          | 394838368         |
| 0.446         | 9.0   | 281178 | 0.5006          | 418033696         |
| 0.5693        | 9.5   | 296799 | 0.5004          | 441282560         |
| 0.5027        | 10.0  | 312420 | 0.5004          | 464483424         |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1