train_qnli_456_1760637862

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the qnli dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3754
  • Num Input Tokens Seen: 207225024

Model description

More information needed

Intended uses & limitations

More information needed
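
No usage guidance is documented, so the following is a minimal, untested sketch of loading the adapter on top of the base model with PEFT. It assumes the Hub repo id rbelanec/train_qnli_456_1760637862 and access to the gated base model; the QNLI prompt template is a guess, since the prompt format used during training is not documented.

```python
# Minimal sketch, untested. Assumptions: the Hub repo id
# "rbelanec/train_qnli_456_1760637862" and access to the gated base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_qnli_456_1760637862"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

# QNLI: does the sentence answer the question? The exact prompt format used
# during training is undocumented, so this template is a guess.
prompt = (
    "Does the sentence answer the question? Answer yes or no.\n"
    "Question: What is the capital of France?\n"
    "Sentence: Paris is the capital and largest city of France.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```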

Training and evaluation data

The adapter was trained and evaluated on the qnli dataset. QNLI (Question-answering Natural Language Inference) is a GLUE task derived from SQuAD: given a question and a sentence, the model predicts whether the sentence contains the answer. The preprocessing and prompt formatting used here are not documented.
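
As a reference point, a minimal sketch of loading the data, assuming the QNLI configuration of the GLUE benchmark on the Hugging Face Hub (nyu-mll/glue):

```python
# Minimal sketch, assuming the QNLI configuration of the GLUE benchmark
# repo on the Hub; the training-time preprocessing is undocumented.
from datasets import load_dataset

qnli = load_dataset("nyu-mll/glue", "qnli")
print(qnli)              # train / validation / test splits
print(qnli["train"][0])  # fields: question, sentence, label, idx
```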

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 456
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
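
For reference, a hedged sketch of how these values map onto a transformers TrainingArguments object. The output_dir is a placeholder, and include_num_input_tokens_seen is an assumption made because the card reports input tokens seen:

```python
# Hedged sketch mapping the listed hyperparameters onto transformers
# TrainingArguments. output_dir is a placeholder; include_num_input_tokens_seen
# is an assumption, matching the token counter reported in the results.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_qnli_456_1760637862",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
    include_num_input_tokens_seen=True,
)
```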

Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|---------------|-------|--------|-----------------|-------------------|
| 0.041         | 1.0   | 23567  | 0.0440          | 10354304          |
| 0.0332        | 2.0   | 47134  | 0.0410          | 20707072          |
| 0.0364        | 3.0   | 70701  | 0.0374          | 31068416          |
| 0.0237        | 4.0   | 94268  | 0.0365          | 41429120          |
| 0.0145        | 5.0   | 117835 | 0.0367          | 51792992          |
| 0.0154        | 6.0   | 141402 | 0.0384          | 62154656          |
| 0.0066        | 7.0   | 164969 | 0.0387          | 72517024          |
| 0.0111        | 8.0   | 188536 | 0.0413          | 82880000          |
| 0.0078        | 9.0   | 212103 | 0.0388          | 93239936          |
| 0.0149        | 10.0  | 235670 | 0.0395          | 103606752         |
| 0.0248        | 11.0  | 259237 | 0.0459          | 113970336         |
| 0.0343        | 12.0  | 282804 | 0.0456          | 124330144         |
| 0.0025        | 13.0  | 306371 | 0.0483          | 134690080         |
| 0.0008        | 14.0  | 329938 | 0.0555          | 145051648         |
| 0.0014        | 15.0  | 353505 | 0.0632          | 155411232         |
| 0.0016        | 16.0  | 377072 | 0.0653          | 165771456         |
| 0.0011        | 17.0  | 400639 | 0.0711          | 176136224         |
| 0.0035        | 18.0  | 424206 | 0.0743          | 186502496         |
| 0.0017        | 19.0  | 447773 | 0.0794          | 196862944         |
| 0.002         | 20.0  | 471340 | 0.0804          | 207225024         |

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • PyTorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4
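
To approximate this environment, pinning the listed versions should suffice. The CUDA 12.8 wheel index URL is an assumption inferred from the +cu128 build tag:

```bash
pip install "peft==0.17.1" "transformers==4.51.3" "datasets==4.0.0" "tokenizers==0.21.4"
pip install "torch==2.9.0" --index-url https://download.pytorch.org/whl/cu128
```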