train_siqa_1754507486

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the siqa (Social IQa) dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the list):

  • Loss: 0.2486
  • Num Input Tokens Seen: 29840264
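Because this card ships only the adapter weights, the base model must be loaded first and the adapter attached on top. The sketch below shows one way to do that with transformers and peft; the repo id is the one this card belongs to, but the prompt format is an assumption, since the exact training template is not documented here.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER_ID = "rbelanec/train_siqa_1754507486"  # this repo

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",           # requires the accelerate package
)
# Attach the fine-tuned adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Illustrative SIQA-style prompt (context / question / answer options);
# the template actually used during training is not documented in this card.
prompt = (
    "Context: Tracy didn't go home that evening and resisted Riley's attacks.\n"
    "Question: What does Tracy need to do before this?\n"
    "A) make a new plan\nB) go home and see Riley\nC) find somewhere to go\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```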

Model description

This is a PEFT adapter for meta-llama/Meta-Llama-3-8B-Instruct trained on the siqa dataset (see Training procedure below); further details have not been provided.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
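The training script itself is not part of this card. As a hedged sketch, the values above might map onto transformers.TrainingArguments roughly as follows; the output_dir and any arguments not listed in the card are illustrative assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_siqa_1754507486",  # illustrative; actual path unknown
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```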

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.3673        | 0.5   | 3759  | 0.3175          | 1495072           |
| 0.0729        | 1.0   | 7518  | 0.2866          | 2984720           |
| 0.3034        | 1.5   | 11277 | 0.2782          | 4477104           |
| 0.3572        | 2.0   | 15036 | 0.2689          | 5970384           |
| 0.2673        | 2.5   | 18795 | 0.2486          | 7462384           |
| 0.0773        | 3.0   | 22554 | 0.2718          | 8954176           |
| 0.0328        | 3.5   | 26313 | 0.2737          | 10445088          |
| 0.398         | 4.0   | 30072 | 0.2752          | 11937344          |
| 0.2919        | 4.5   | 33831 | 0.2819          | 13430048          |
| 0.3342        | 5.0   | 37590 | 0.2992          | 14920992          |
| 0.0416        | 5.5   | 41349 | 0.2832          | 16412032          |
| 0.2025        | 6.0   | 45108 | 0.2761          | 17904680          |
| 0.3087        | 6.5   | 48867 | 0.2822          | 19397416          |
| 0.5182        | 7.0   | 52626 | 0.2834          | 20888856          |
| 0.3096        | 7.5   | 56385 | 0.2855          | 22381080          |
| 0.7383        | 8.0   | 60144 | 0.2841          | 23872880          |
| 0.2775        | 8.5   | 63903 | 0.2843          | 25363344          |
| 0.249         | 9.0   | 67662 | 0.2833          | 26855848          |
| 0.575         | 9.5   | 71421 | 0.2843          | 28348712          |
| 0.7715        | 10.0  | 75180 | 0.2835          | 29840264          |

The evaluation loss of 0.2486 reported at the top of this card corresponds to the epoch-2.5 checkpoint, where validation loss reaches its minimum; it rises slightly over the remaining epochs.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1