train_qnli_42_1773148414

This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct on the qnli dataset. It achieves the following results on the evaluation set (a usage sketch follows this list):

  • Loss: 0.0501
  • Num Input Tokens Seen: 56574368
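
As a minimal usage sketch, the adapter can be loaded on top of the base model with PEFT. The prompt wording below is an assumption: the card does not document how QNLI examples were formatted during fine-tuning, and the label strings (`entailment` / `not_entailment`) are assumed from the standard QNLI task.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "rbelanec/train_qnli_42_1773148414"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

# QNLI asks whether a sentence answers a question. The exact prompt template
# used during fine-tuning is not documented in this card, so this wording is
# an assumption, not the training format.
prompt = (
    "Does the sentence answer the question? Reply entailment or not_entailment.\n"
    "Question: What is the capital of France?\n"
    "Sentence: Paris is the capital and largest city of France.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```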

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 5
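
As a sketch, these hyperparameters map onto a `transformers` `TrainingArguments` configuration as shown below. The dataset preprocessing, PEFT configuration, and `Trainer` setup are not documented in this card, so only the optimizer and scheduler settings are reproduced; the `output_dir` is hypothetical.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_qnli_42_1773148414",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW as implemented in PyTorch
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```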

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.0721        | 0.25  | 2946  | 0.0879          | 2823680           |
| 0.0474        | 0.5   | 5892  | 0.0703          | 5652800           |
| 0.0506        | 0.75  | 8838  | 0.0601          | 8482944           |
| 0.0115        | 1.0   | 11784 | 0.0565          | 11312256          |
| 0.0551        | 1.25  | 14730 | 0.0539          | 14142784          |
| 0.0918        | 1.5   | 17676 | 0.0533          | 16969472          |
| 0.0382        | 1.75  | 20622 | 0.0531          | 19782400          |
| 0.0376        | 2.0   | 23568 | 0.0557          | 22629440          |
| 0.0149        | 2.25  | 26514 | 0.0528          | 25460032          |
| 0.0396        | 2.5   | 29460 | 0.0544          | 28284608          |
| 0.0373        | 2.75  | 32406 | 0.0521          | 31130432          |
| 0.0529        | 3.0   | 35352 | 0.0501          | 33947392          |
| 0.0446        | 3.25  | 38298 | 0.0527          | 36783040          |
| 0.0254        | 3.5   | 41244 | 0.0525          | 39604544          |
| 0.0303        | 3.75  | 44190 | 0.0543          | 42421440          |
| 0.0449        | 4.0   | 47136 | 0.0523          | 45265344          |
| 0.021         | 4.25  | 50082 | 0.0530          | 48098944          |
| 0.045         | 4.5   | 53028 | 0.0532          | 50906176          |
| 0.0709        | 4.75  | 55974 | 0.0529          | 53746240          |
| 0.0677        | 5.0   | 58920 | 0.0528          | 56574368          |

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • PyTorch 2.10.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4
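
If the adapter is a weight-based PEFT method such as LoRA (the card does not state the adapter type), it could optionally be merged into the base model for adapter-free deployment. A hedged sketch; merging does not apply to prompt-tuning-style adapters:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "rbelanec/train_qnli_42_1773148414")
merged = model.merge_and_unload()                   # folds adapter weights into the base model
merged.save_pretrained("llama-3.2-1b-qnli-merged")  # hypothetical local path
```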