train_rte_1753094153

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the RTE (Recognizing Textual Entailment) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0824
  • Num Input Tokens Seen: 3481336
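Since the framework versions below include PEFT, this checkpoint is an adapter rather than a full set of model weights. A minimal loading sketch follows; the repo id rbelanec/train_rte_1753094153 is taken from this card, while the prompt format is an assumption, as the card does not document how RTE premise/hypothesis pairs were serialized during training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_rte_1753094153"  # this card's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

# Hypothetical prompt template: adjust to match however the training data
# actually formatted premise/hypothesis pairs.
prompt = (
    "premise: The cat sat on the mat.\n"
    "hypothesis: A cat is on a mat.\n"
    "entailment or not_entailment?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```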

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
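
For reference, a minimal sketch of how these settings map onto transformers.TrainingArguments; only the values listed above come from the card, while the LoRA configuration, data pipeline, and output path are undocumented and left as assumptions or library defaults:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; everything else stays at defaults.
args = TrainingArguments(
    output_dir="train_rte_1753094153",  # assumed; not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",          # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```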

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.387         | 0.5009 | 281  | 0.2907          | 176032            |
| 0.1348        | 1.0018 | 562  | 0.1241          | 349200            |
| 0.117         | 1.5027 | 843  | 0.1047          | 524208            |
| 0.0708        | 2.0036 | 1124 | 0.0999          | 699264            |
| 0.0739        | 2.5045 | 1405 | 0.0971          | 873600            |
| 0.0656        | 3.0053 | 1686 | 0.0924          | 1048184           |
| 0.105         | 3.5062 | 1967 | 0.0896          | 1223864           |
| 0.041         | 4.0071 | 2248 | 0.0880          | 1397624           |
| 0.0602        | 4.5080 | 2529 | 0.0871          | 1570936           |
| 0.102         | 5.0089 | 2810 | 0.0842          | 1746384           |
| 0.0558        | 5.5098 | 3091 | 0.0852          | 1922384           |
| 0.0511        | 6.0107 | 3372 | 0.0827          | 2092320           |
| 0.0312        | 6.5116 | 3653 | 0.0834          | 2267520           |
| 0.031         | 7.0125 | 3934 | 0.0845          | 2441688           |
| 0.0529        | 7.5134 | 4215 | 0.0831          | 2614936           |
| 0.0485        | 8.0143 | 4496 | 0.0827          | 2790832           |
| 0.0298        | 8.5152 | 4777 | 0.0824          | 2963888           |
| 0.0473        | 9.0160 | 5058 | 0.0828          | 3137352           |
| 0.0689        | 9.5169 | 5339 | 0.0824          | 3312648           |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1
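
A quick way to verify a local environment against these versions (a sketch; the +cu126 suffix on the PyTorch version additionally pins the CUDA build):

```python
# Sanity-check the installed stack against the versions listed above.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.7.1+cu126",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
installed = {m.__name__: m.__version__ for m in (peft, transformers, torch, datasets, tokenizers)}
for name, want in expected.items():
    print(f"{name}: expected {want}, installed {installed[name]}")
```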