train_cb_101112_1760637985

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the cb dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1378
  • Num Input Tokens Seen: 723584
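
This checkpoint is a PEFT adapter rather than standalone weights (see the framework versions below), so it loads on top of the base model. Below is a minimal loading and inference sketch, assuming access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights and using this repo's id as the adapter path. The CB-style prompt is a hypothetical placeholder: the exact prompt format used during fine-tuning is not documented in this card.

```python
# Minimal sketch: load the PEFT adapter on top of the base model.
# Assumes access to the gated Meta-Llama-3-8B-Instruct weights; the prompt
# below is a hypothetical CB-style (presumably SuperGLUE CommitmentBank)
# example, not the format documented by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_101112_1760637985"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Hypothetical entailment-style prompt.
messages = [{
    "role": "user",
    "content": (
        "Premise: It was a busy day at the office.\n"
        "Hypothesis: The office was busy.\n"
        "Does the premise entail the hypothesis? "
        "Answer entailment, contradiction, or neutral."
    ),
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```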

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 101112
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
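
These map one-to-one onto Hugging Face TrainingArguments fields. The sketch below shows an equivalent configuration; the output_dir and anything not listed above are assumptions, since the actual training script is not included in this card.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the listed hyperparameters.
# output_dir and any unlisted settings are assumptions, not from the card.
args = TrainingArguments(
    output_dir="train_cb_101112_1760637985",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```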

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 1.2004        | 1.0   | 57   | 1.1030          | 36112             |
| 0.657         | 2.0   | 114  | 0.5552          | 71552             |
| 0.3455        | 3.0   | 171  | 0.1739          | 108088            |
| 0.2157        | 4.0   | 228  | 0.1595          | 144720            |
| 0.151         | 5.0   | 285  | 0.1609          | 181120            |
| 0.1168        | 6.0   | 342  | 0.1584          | 217128            |
| 0.0967        | 7.0   | 399  | 0.1508          | 253536            |
| 0.0816        | 8.0   | 456  | 0.1463          | 290112            |
| 0.0819        | 9.0   | 513  | 0.1481          | 325872            |
| 0.0791        | 10.0  | 570  | 0.1456          | 361920            |
| 0.1327        | 11.0  | 627  | 0.1422          | 398432            |
| 0.0426        | 12.0  | 684  | 0.1420          | 435536            |
| 0.0717        | 13.0  | 741  | 0.1432          | 471520            |
| 0.1513        | 14.0  | 798  | 0.1378          | 507256            |
| 0.2993        | 15.0  | 855  | 0.1397          | 543064            |
| 0.2543        | 16.0  | 912  | 0.1395          | 579704            |
| 0.1008        | 17.0  | 969  | 0.1394          | 615960            |
| 0.0189        | 18.0  | 1026 | 0.1400          | 652368            |
| 0.0258        | 19.0  | 1083 | 0.1389          | 687976            |
| 0.1754        | 20.0  | 1140 | 0.1403          | 723584            |

The reported evaluation loss of 0.1378 corresponds to the epoch-14 checkpoint, the lowest validation loss reached during training.

Framework versions

  • PEFT 0.17.1
  • Transformers 4.51.3
  • PyTorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.21.4