train_cola_1757340261

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the cola dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1481
  • Num Input Tokens Seen: 3663392
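
Since the framework versions below list PEFT, this checkpoint is an adapter on top of the base model rather than a full set of weights. A minimal loading sketch, assuming the repository id rbelanec/train_cola_1757340261 holds a PEFT adapter for meta-llama/Meta-Llama-3-8B-Instruct (the adapter type, e.g. LoRA, is not stated in this card):

```python
# Minimal sketch: attach the PEFT adapter to the base model for inference.
# Assumes access to the gated meta-llama base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "rbelanec/train_cola_1757340261")
model.eval()
```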

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
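
The card does not document the data, but the name suggests the CoLA (Corpus of Linguistic Acceptability) task from GLUE. A hedged loading sketch under that assumption:

```python
# Assumption: "cola" is the CoLA subset of GLUE; the card does not name the
# exact dataset repository, split sizes, or preprocessing used for training.
from datasets import load_dataset

cola = load_dataset("glue", "cola")
print(cola["train"][0])
# e.g. {'sentence': "Our friends won't buy this analysis, ...", 'label': 1, 'idx': 0}
```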

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 101112
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
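
A hedged sketch expressing the hyperparameters above as transformers TrainingArguments; the output_dir is a hypothetical name, and the card does not include the actual training script:

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments fields.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_cola_1757340261",  # hypothetical output directory
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```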

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|---------------|-------|-------|-----------------|-------------------|
| 0.3114        | 0.5   | 962   | 0.1727          | 183040            |
| 0.2949        | 1.0   | 1924  | 0.2166          | 366136            |
| 0.1907        | 1.5   | 2886  | 0.1481          | 548856            |
| 0.2194        | 2.0   | 3848  | 0.1639          | 732880            |
| 0.1692        | 2.5   | 4810  | 0.1905          | 916368            |
| 0.0942        | 3.0   | 5772  | 0.1816          | 1099816           |
| 0.1207        | 3.5   | 6734  | 0.2045          | 1283208           |
| 0.0045        | 4.0   | 7696  | 0.1811          | 1465464           |
| 0.0025        | 4.5   | 8658  | 0.1919          | 1648696           |
| 0.0003        | 5.0   | 9620  | 0.2772          | 1831728           |
| 0.0001        | 5.5   | 10582 | 0.2993          | 2014288           |
| 0.0005        | 6.0   | 11544 | 0.2349          | 2198176           |
| 0.0002        | 6.5   | 12506 | 0.3193          | 2382016           |
| 0.0002        | 7.0   | 13468 | 0.3106          | 2564208           |
| 0.0           | 7.5   | 14430 | 0.4077          | 2746960           |
| 0.0           | 8.0   | 15392 | 0.4232          | 2930240           |
| 0.0           | 8.5   | 16354 | 0.4086          | 3113696           |
| 0.0           | 9.0   | 17316 | 0.4257          | 3297136           |
| 0.0           | 9.5   | 18278 | 0.4367          | 3480592           |
| 0.0           | 10.0  | 19240 | 0.4409          | 3663392           |

The reported evaluation loss of 0.1481 matches the epoch 1.5 checkpoint, the minimum validation loss over training, which suggests the best checkpoint was retained; validation loss rises steadily after epoch 4 as training loss approaches zero, a typical overfitting pattern.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1