train_qnli_1756735870

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the qnli dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1076
  • Num Input Tokens Seen: 94426336
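
A minimal inference sketch follows. The prompt template and decoding settings are illustrative assumptions, since the card does not document how QNLI pairs were formatted during training, and loading requires access to the gated meta-llama base model:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its meta-llama/Meta-Llama-3-8B-Instruct base.
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_qnli_1756735870",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Illustrative QNLI-style prompt: does the sentence answer the question?
# (assumed format, not taken from the training script)
prompt = (
    "Question: What is the capital of France?\n"
    "Sentence: Paris is the capital and largest city of France.\n"
    "Answer (entailment or not_entailment):"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```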

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
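
The model name suggests GLUE's QNLI task (question–sentence entailment). A hedged sketch of loading it with the pinned datasets version; the exact split and preprocessing used for this run are unknown:

```python
from datasets import load_dataset

# QNLI pairs a question with a sentence and labels whether the sentence
# answers the question (0 = entailment, 1 = not_entailment).
qnli = load_dataset("nyu-mll/glue", "qnli")
print(qnli["train"][0])  # {'question': ..., 'sentence': ..., 'label': ..., 'idx': ...}
print(qnli["validation"].num_rows)
```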

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
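
As referenced above, a sketch of how these settings map onto transformers.TrainingArguments; the output directory is an assumption, and anything not listed is left at its default:

```python
from transformers import TrainingArguments

# Mirror of the listed hyperparameters; unlisted options keep their defaults.
training_args = TrainingArguments(
    output_dir="train_qnli_1756735870",  # assumed name, matching the model id
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```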

Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|
| 0.1101        | 0.5   | 23567  | 0.0919          | 4726160           |
| 0.0048        | 1.0   | 47134  | 0.0505          | 9443856           |
| 0.1224        | 1.5   | 70701  | 0.0457          | 14171744          |
| 0.0074        | 2.0   | 94268  | 0.0379          | 18885616          |
| 0.0206        | 2.5   | 117835 | 0.0410          | 23594480          |
| 0.0022        | 3.0   | 141402 | 0.0483          | 28322288          |
| 0.0055        | 3.5   | 164969 | 0.0407          | 33044192          |
| 0.1087        | 4.0   | 188536 | 0.0372          | 37765424          |
| 0.1327        | 4.5   | 212103 | 0.0544          | 42484624          |
| 0.0309        | 5.0   | 235670 | 0.0420          | 47208432          |
| 0.0031        | 5.5   | 259237 | 0.0485          | 51927872          |
| 0.0863        | 6.0   | 282804 | 0.0511          | 56653952          |
| 0.0032        | 6.5   | 306371 | 0.0703          | 61379616          |
| 0.0021        | 7.0   | 329938 | 0.0588          | 66100352          |
| 0.0002        | 7.5   | 353505 | 0.0728          | 70821872          |
| 0.0012        | 8.0   | 377072 | 0.0659          | 75542800          |
| 0.1140        | 8.5   | 400639 | 0.0899          | 80264848          |
| 0.0000        | 9.0   | 424206 | 0.0903          | 84986304          |
| 0.0020        | 9.5   | 447773 | 0.1065          | 89703232          |
| 0.0002        | 10.0  | 471340 | 0.1076          | 94426336          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1