train_boolq_42_1760786040

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the boolq dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1164
  • Num Input Tokens Seen: 42773120
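
Since the framework list below includes PEFT, this repository presumably hosts a PEFT adapter rather than full model weights, so it is loaded on top of the base model. The following is a minimal loading and inference sketch: the adapter repo id `rbelanec/train_boolq_42_1760786040` is taken from this card, while the BoolQ prompt template is an assumption, as the card does not document the prompt format used during fine-tuning.

```python
# Minimal loading/inference sketch; requires access to the gated
# meta-llama/Meta-Llama-3-8B-Instruct base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_boolq_42_1760786040"  # this card's repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter
model.eval()

# BoolQ-style yes/no prompt. NOTE: this template is an assumption; the card
# does not document how BoolQ examples were formatted during training.
prompt = (
    "Passage: The Amazon is the largest rainforest on Earth.\n"
    "Question: Is the Amazon the largest rainforest on Earth?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(
        **inputs, max_new_tokens=3, pad_token_id=tokenizer.eos_token_id
    )
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For LoRA-type adapters, `model.merge_and_unload()` can optionally fold the adapter into the base weights for faster inference; whether that applies here depends on the adapter type, which the card does not state.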

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
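
For reference, these values map onto a Hugging Face `TrainingArguments` object roughly as follows. This is a hypothetical reconstruction, not the published training script: the card does not include the actual Trainer or PEFT configuration, and the output directory name is illustrative.

```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# the actual training script for this run is not published.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_boolq_42_1760786040",  # illustrative name only
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",        # betas=(0.9, 0.999), epsilon=1e-08 are the adamw_torch defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```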

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2686        | 1.0   | 2121  | 0.1823          | 2135488           |
| 0.1263        | 2.0   | 4242  | 0.1516          | 4271424           |
| 0.0930        | 3.0   | 6363  | 0.1374          | 6407520           |
| 0.3305        | 4.0   | 8484  | 0.1328          | 8553728           |
| 0.1152        | 5.0   | 10605 | 0.1278          | 10692704          |
| 0.0650        | 6.0   | 12726 | 0.1221          | 12829472          |
| 0.0645        | 7.0   | 14847 | 0.1195          | 14967104          |
| 0.1854        | 8.0   | 16968 | 0.1182          | 17105760          |
| 0.0566        | 9.0   | 19089 | 0.1164          | 19246048          |
| 0.1086        | 10.0  | 21210 | 0.1167          | 21382880          |
| 0.0962        | 11.0  | 23331 | 0.1168          | 23522528          |
| 0.0945        | 12.0  | 25452 | 0.1173          | 25662176          |
| 0.1126        | 13.0  | 27573 | 0.1182          | 27797760          |
| 0.0560        | 14.0  | 29694 | 0.1183          | 29933184          |
| 0.0714        | 15.0  | 31815 | 0.1180          | 32075552          |
| 0.0758        | 16.0  | 33936 | 0.1177          | 34216384          |
| 0.1073        | 17.0  | 36057 | 0.1188          | 36358080          |
| 0.0444        | 18.0  | 38178 | 0.1186          | 38498720          |
| 0.2689        | 19.0  | 40299 | 0.1189          | 40635680          |
| 0.0255        | 20.0  | 42420 | 0.1189          | 42773120          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1