train_rte_456_1760637784

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the rte (Recognizing Textual Entailment) dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1027
  • Num Input Tokens Seen: 6215968
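Because the Framework versions section below lists PEFT, this repository most likely contains a parameter-efficient adapter rather than full model weights. The following is a minimal, unofficial loading sketch using the peft library; the repository id is taken from the title, and the dtype/device settings are assumptions:

```python
# Minimal sketch (assumptions: this repo is a PEFT adapter on top of the
# Meta-Llama-3-8B-Instruct base; dtype/device_map choices are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_rte_456_1760637784"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```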

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 456
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
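As referenced above, here is a hedged sketch of how these values would map onto a transformers TrainingArguments object; the original training script is not part of this card, so output_dir, the per-device interpretation of the batch sizes, and every unlisted argument are assumptions:

```python
# Hedged sketch only: reconstructs the listed hyperparameters, not the
# authors' actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_rte_456_1760637784",  # assumed: run name as output dir
    learning_rate=1e-05,
    per_device_train_batch_size=4,   # assumed per-device, as in the card
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```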

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.1312        | 2.0   | 996  | 0.1648          | 622720            |
| 0.1786        | 4.0   | 1992 | 0.1621          | 1242912           |
| 0.1139        | 6.0   | 2988 | 0.1923          | 1862848           |
| 0.0842        | 8.0   | 3984 | 0.1915          | 2487712           |
| 0.0943        | 10.0  | 4980 | 0.2987          | 3108288           |
| 0.1454        | 12.0  | 5976 | 0.5770          | 3732864           |
| 0.0009        | 14.0  | 6972 | 0.8435          | 4355328           |
| 0.0001        | 16.0  | 7968 | 1.0402          | 4976032           |
| 0.0           | 18.0  | 8964 | 1.0930          | 5594752           |
| 0.0001        | 20.0  | 9960 | 1.1027          | 6215968           |

The best validation loss (0.1621) occurs at epoch 4; from epoch 6 onward the training loss approaches zero while the validation loss climbs steadily to 1.1027, a pattern consistent with overfitting.

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • Pytorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4