train_openbookqa_1754507501

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the openbookqa dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2698
  • Num Input Tokens Seen: 4204168
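
Below is a minimal usage sketch, assuming this repository hosts a PEFT adapter trained on top of the base model named above; the adapter repository id, dtype, and generation settings are illustrative, not taken from the training code.

```python
# Sketch: load the base model, attach this adapter, and run a single prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_openbookqa_1754507501"  # this repository (assumed id)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

prompt = "Which of these objects conducts electricity best? A) wood B) copper C) glass D) rubber"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```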

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch mirroring them appears after the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
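
A minimal sketch of how these settings map onto Hugging Face TrainingArguments; the output_dir and any settings not listed above are assumptions.

```python
# Sketch only: mirrors the hyperparameters listed above, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_openbookqa_1754507501",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```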

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1609        | 0.5   | 558   | 0.3989          | 210048            |
| 0.1553        | 1.0   | 1116  | 0.3019          | 420520            |
| 0.1877        | 1.5   | 1674  | 0.2930          | 630888            |
| 0.1598        | 2.0   | 2232  | 0.2751          | 841024            |
| 0.1687        | 2.5   | 2790  | 0.2808          | 1051168           |
| 0.2138        | 3.0   | 3348  | 0.2698          | 1261304           |
| 0.035         | 3.5   | 3906  | 0.2841          | 1472152           |
| 0.0211        | 4.0   | 4464  | 0.2730          | 1682016           |
| 0.1641        | 4.5   | 5022  | 0.2859          | 1892160           |
| 0.1816        | 5.0   | 5580  | 0.2932          | 2102920           |
| 0.1433        | 5.5   | 6138  | 0.3073          | 2311976           |
| 0.3356        | 6.0   | 6696  | 0.3024          | 2523672           |
| 0.3081        | 6.5   | 7254  | 0.3156          | 2732440           |
| 0.0016        | 7.0   | 7812  | 0.3136          | 2943688           |
| 0.2086        | 7.5   | 8370  | 0.3169          | 3153640           |
| 0.2407        | 8.0   | 8928  | 0.3258          | 3363864           |
| 0.2006        | 8.5   | 9486  | 0.3297          | 3574616           |
| 0.0247        | 9.0   | 10044 | 0.3266          | 3783840           |
| 0.0091        | 9.5   | 10602 | 0.3295          | 3994976           |
| 0.8494        | 10.0  | 11160 | 0.3303          | 4204168           |
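
Training ends at step 11160 over 10 epochs, so the warmup ratio of 0.1 corresponds to roughly 1116 warmup steps before cosine decay. A small sketch of that schedule using the transformers scheduler helper follows; the optimizer here is a stand-in, not the training code.

```python
# Sketch: reconstructing the learning-rate schedule implied by the hyperparameters above.
import torch
from transformers import get_cosine_schedule_with_warmup

total_steps = 11160                    # final optimizer step in the results table
warmup_steps = int(0.1 * total_steps)  # warmup_ratio 0.1 -> 1116 warmup steps

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters
optimizer = torch.optim.AdamW(params, lr=5e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)
```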

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1