train_boolq_42_1760747449

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the boolq dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3263
  • Num Input Tokens Seen: 42773120

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.03
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
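The card does not include the training script, but the values above map directly onto a transformers TrainingArguments object plus a PEFT adapter config. The sketch below is a reconstruction under assumptions: the LoRA rank and alpha, the BoolQ prompt template, and the tokenization settings are illustrative guesses and are not taken from this card.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical LoRA settings -- the card only indicates that a PEFT adapter was trained.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16))

# Hypothetical prompt template; the template actually used is not documented here.
def tokenize(example):
    answer = "yes" if example["answer"] else "no"
    text = f"Passage: {example['passage']}\nQuestion: {example['question']}\nAnswer: {answer}"
    return tokenizer(text, truncation=True, max_length=512)

boolq = load_dataset("boolq").map(tokenize, remove_columns=["question", "passage", "answer"])

# Hyperparameters taken from the list above; everything else is left at defaults.
args = TrainingArguments(
    output_dir="train_boolq_42_1760747449",
    learning_rate=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
    eval_strategy="epoch",
    logging_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=boolq["train"],
    eval_dataset=boolq["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    processing_class=tokenizer,
)
trainer.train()
```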

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.3569        | 1.0   | 2121  | 0.3267          | 2135488           |
| 0.2674        | 2.0   | 4242  | 0.3314          | 4271424           |
| 0.3091        | 3.0   | 6363  | 0.3263          | 6407520           |
| 0.3483        | 4.0   | 8484  | 0.3280          | 8553728           |
| 0.3643        | 5.0   | 10605 | 0.3315          | 10692704          |
| 0.3169        | 6.0   | 12726 | 0.3281          | 12829472          |
| 0.3145        | 7.0   | 14847 | 0.3273          | 14967104          |
| 0.3350        | 8.0   | 16968 | 0.3279          | 17105760          |
| 0.3657        | 9.0   | 19089 | 0.3265          | 19246048          |
| 0.3387        | 10.0  | 21210 | 0.3269          | 21382880          |
| 0.3507        | 11.0  | 23331 | 0.3278          | 23522528          |
| 0.3114        | 12.0  | 25452 | 0.3274          | 25662176          |
| 0.3429        | 13.0  | 27573 | 0.3282          | 27797760          |
| 0.3628        | 14.0  | 29694 | 0.3292          | 29933184          |
| 0.3117        | 15.0  | 31815 | 0.3278          | 32075552          |
| 0.2979        | 16.0  | 33936 | 0.3295          | 34216384          |
| 0.3781        | 17.0  | 36057 | 0.3289          | 36358080          |
| 0.3036        | 18.0  | 38178 | 0.3289          | 38498720          |
| 0.3053        | 19.0  | 40299 | 0.3292          | 40635680          |
| 0.3171        | 20.0  | 42420 | 0.3290          | 42773120          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
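
With the stack above, the trained adapter can be loaded on top of the base model via PEFT for inference. A minimal sketch, assuming the adapter is hosted as rbelanec/train_boolq_42_1760747449 and using a plain passage/question prompt; the exact prompt format used during training is not documented in this card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_boolq_42_1760747449"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Hypothetical BoolQ-style prompt; adjust to match the template used in training.
prompt = (
    "Passage: The sky appears blue because of Rayleigh scattering.\n"
    "Question: is the sky blue because of rayleigh scattering?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```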