train_openbookqa_1754652173

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the openbookqa dataset (a short usage sketch follows the results below). It achieves the following results on the evaluation set:

  • Loss: 0.3450
  • Num Input Tokens Seen: 4204168
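The snippet below is a minimal, unofficial usage sketch, not part of the original card: it assumes this repository is a PEFT adapter on top of the base model named above (the adapter id rbelanec/train_openbookqa_1754652173 is taken from the model page), and the question prompt is illustrative only.

```python
# Minimal sketch: load the base model, attach this PEFT adapter, and generate.
# Assumes transformers, peft, and accelerate are installed; prompt format is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_openbookqa_1754652173"  # adapter repo id from the model page

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Question: What do plants need to perform photosynthesis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```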

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
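As a rough, unofficial illustration, the hyperparameters above map onto Hugging Face TrainingArguments roughly as follows; the actual training script, dataset preprocessing, and PEFT configuration are not documented in this card, and the output_dir is a placeholder.

```python
# Sketch of the listed hyperparameters expressed as TrainingArguments.
# output_dir and everything not listed above are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_openbookqa_1754652173",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```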

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|---------------|-------|-------|-----------------|-------------------|
| 0.3055        | 0.5   | 558   | 0.3081          | 210048            |
| 0.1186        | 1.0   | 1116  | 0.2643          | 420520            |
| 0.2187        | 1.5   | 1674  | 0.2139          | 630888            |
| 0.1435        | 2.0   | 2232  | 0.1963          | 841024            |
| 0.2471        | 2.5   | 2790  | 0.2035          | 1051168           |
| 0.0652        | 3.0   | 3348  | 0.1994          | 1261304           |
| 0.0565        | 3.5   | 3906  | 0.2359          | 1472152           |
| 0.0108        | 4.0   | 4464  | 0.2148          | 1682016           |
| 0.1913        | 4.5   | 5022  | 0.2468          | 1892160           |
| 0.0014        | 5.0   | 5580  | 0.2971          | 2102920           |
| 0.1651        | 5.5   | 6138  | 0.2812          | 2311976           |
| 0.1306        | 6.0   | 6696  | 0.3023          | 2523672           |
| 0.0629        | 6.5   | 7254  | 0.3515          | 2732440           |
| 0.0013        | 7.0   | 7812  | 0.2992          | 2943688           |
| 0.0017        | 7.5   | 8370  | 0.3889          | 3153640           |
| 0.1723        | 8.0   | 8928  | 0.3595          | 3363864           |
| 0.0107        | 8.5   | 9486  | 0.3929          | 3574616           |
| 0.0002        | 9.0   | 10044 | 0.4029          | 3783840           |
| 0.0003        | 9.5   | 10602 | 0.4061          | 3994976           |
| 0.1073        | 10.0  | 11160 | 0.4034          | 4204168           |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1