train_hellaswag_42_1760637627

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the hellaswag dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0582
  • Num Input Tokens Seen: 218263888
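
Because this checkpoint is a PEFT adapter rather than a full set of model weights, it loads on top of the base model. Below is a minimal loading sketch, assuming the adapter is published as rbelanec/train_hellaswag_42_1760637627 and that you have access to the gated Llama 3 base weights; the prompt is illustrative, since the exact prompt template used in training is not documented here:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its base model
# (meta-llama/Meta-Llama-3-8B-Instruct, recorded in the adapter config).
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_hellaswag_42_1760637627",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Illustrative HellaSwag-style continuation prompt; not the exact
# template used during training.
inputs = tokenizer("A man is sitting on a roof. He", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```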

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
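
For orientation, here is a hedged sketch of how the values above map onto transformers.TrainingArguments. The output directory is an assumed name, and the data/model wiring of the original training script is omitted:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="train_hellaswag_42_1760637627",  # assumed, not from the original run
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```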

Training results

The reported evaluation loss (0.0582) matches the epoch-2 checkpoint; from epoch 3 onward the training loss approaches zero while the validation loss climbs, suggesting the model overfits past epoch 2.

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|
| 0.092         | 1.0   | 8979   | 0.0680          | 10917120          |
| 0.0056        | 2.0   | 17958  | 0.0582          | 21836032          |
| 0.0061        | 3.0   | 26937  | 0.0585          | 32746560          |
| 0.002         | 4.0   | 35916  | 0.0751          | 43661424          |
| 0.0001        | 5.0   | 44895  | 0.0934          | 54578912          |
| 0.0001        | 6.0   | 53874  | 0.1115          | 65488016          |
| 0.0           | 7.0   | 62853  | 0.1196          | 76410304          |
| 0.0001        | 8.0   | 71832  | 0.1139          | 87327296          |
| 0.0           | 9.0   | 80811  | 0.1333          | 98229232          |
| 0.0034        | 10.0  | 89790  | 0.1215          | 109127968         |
| 0.0           | 11.0  | 98769  | 0.1321          | 120042688         |
| 0.0           | 12.0  | 107748 | 0.1602          | 130954720         |
| 0.0           | 13.0  | 116727 | 0.1373          | 141874656         |
| 0.0           | 14.0  | 125706 | 0.1614          | 152783392         |
| 0.0           | 15.0  | 134685 | 0.1812          | 163694096         |
| 0.0           | 16.0  | 143664 | 0.2134          | 174604544         |
| 0.0           | 17.0  | 152643 | 0.2229          | 185523328         |
| 0.0           | 18.0  | 161622 | 0.2227          | 196433472         |
| 0.0           | 19.0  | 170601 | 0.2205          | 207345200         |
| 0.0           | 20.0  | 179580 | 0.2200          | 218263888         |

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • Pytorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4