train_cola_1757340260

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the CoLA (Corpus of Linguistic Acceptability) dataset. It achieves the following results on the evaluation set (see the loading sketch after the list):

  • Loss: 0.1983
  • Num Input Tokens Seen: 3663392
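For reference, below is a minimal loading sketch using PEFT. It assumes the adapter weights are published under `rbelanec/train_cola_1757340260` (the repository name of this card) and that you have access to the gated base model; the prompt is a hypothetical CoLA-style acceptability query, not the exact prompt format used in training.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its meta-llama/Meta-Llama-3-8B-Instruct base.
# The repo id below is assumed from this card; swap in a local path if needed.
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_cola_1757340260",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical CoLA-style prompt: judge grammatical acceptability of a sentence.
prompt = "Is the following sentence grammatically acceptable? 'The book was wrote by her.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```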

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (reconstructed as a TrainingArguments sketch after the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 101112
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
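As a rough illustration, the settings above map onto a `transformers.TrainingArguments` object as sketched below. This is a reconstruction from the list, not the exact training script; the output directory name is hypothetical.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="train_cola_1757340260",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```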

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|---------------|-------|-------|-----------------|-------------------|
| 0.2372        | 0.5   | 962   | 0.1779          | 183040            |
| 0.3332        | 1.0   | 1924  | 0.2056          | 366136            |
| 0.3624        | 1.5   | 2886  | 0.1612          | 548856            |
| 0.1165        | 2.0   | 3848  | 0.1469          | 732880            |
| 0.3072        | 2.5   | 4810  | 0.1503          | 916368            |
| 0.0702        | 3.0   | 5772  | 0.1434          | 1099816           |
| 0.0694        | 3.5   | 6734  | 0.1615          | 1283208           |
| 0.0432        | 4.0   | 7696  | 0.1529          | 1465464           |
| 0.1038        | 4.5   | 8658  | 0.1477          | 1648696           |
| 0.0965        | 5.0   | 9620  | 0.1465          | 1831728           |
| 0.0509        | 5.5   | 10582 | 0.1608          | 2014288           |
| 0.0672        | 6.0   | 11544 | 0.1458          | 2198176           |
| 0.0325        | 6.5   | 12506 | 0.1609          | 2382016           |
| 0.1084        | 7.0   | 13468 | 0.1641          | 2564208           |
| 0.0944        | 7.5   | 14430 | 0.1775          | 2746960           |
| 0.0155        | 8.0   | 15392 | 0.1814          | 2930240           |
| 0.1982        | 8.5   | 16354 | 0.2046          | 3113696           |
| 0.1116        | 9.0   | 17316 | 0.1818          | 3297136           |
| 0.1125        | 9.5   | 18278 | 0.1877          | 3480592           |
| 0.0085        | 10.0  | 19240 | 0.1878          | 3663392           |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
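To reproduce the environment, the versions above can be pinned directly, e.g. as a requirements file (assuming pip; the `+cu128` PyTorch build comes from PyTorch's CUDA 12.8 wheel index rather than plain PyPI):

```
# Pinned to the versions listed above; the +cu128 torch build
# requires PyTorch's CUDA 12.8 wheel index.
peft==0.15.2
transformers==4.51.3
torch==2.8.0
datasets==3.6.0
tokenizers==0.21.1
```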