# train_svamp_1754652180
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the SVAMP dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 1.5173
- Num Input Tokens Seen: 705184
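
The checkpoint is published as a PEFT adapter (see the framework versions at the end of this card), so it can be loaded on top of the base model for inference. The snippet below is a minimal, untested sketch: it assumes the adapter weights are hosted at rbelanec/train_svamp_1754652180, and the prompt is an illustrative SVAMP-style word problem rather than an item from the dataset.

```python
# Minimal inference sketch (assumption: the repo id below hosts a PEFT
# adapter for meta-llama/Meta-Llama-3-8B-Instruct).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_svamp_1754652180",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical SVAMP-style word problem, for illustration only.
messages = [
    {"role": "user",
     "content": "Jack had 8 pens and bought 5 more. How many pens does he have now?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```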
## Model description

This checkpoint is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the SVAMP dataset; further details have not been provided.
## Intended uses & limitations

Given the training data, the adapter is presumably intended for short arithmetic word problems in the style of SVAMP. Its behavior outside that domain has not been evaluated here.
## Training and evaluation data

The model was fine-tuned on the SVAMP dataset of elementary-school math word problems; the validation losses below are reported on its evaluation split.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
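
For reference, these hyperparameters map onto a transformers `TrainingArguments` roughly as follows. This is an untested sketch: the output directory is a placeholder, and everything not listed above (PEFT adapter config, dataset preprocessing, trainer setup) is omitted.

```python
# Sketch of the training configuration implied by the list above
# (output_dir is a placeholder; adapter config and data pipeline omitted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_svamp_1754652180",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",                  # AdamW; betas and epsilon below
    adam_beta1=0.9,                       # are stated explicitly even though
    adam_beta2=0.999,                     # they match the transformers
    adam_epsilon=1e-8,                    # defaults.
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```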
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 10.7702 | 0.5 | 79 | 10.0541 | 35776 |
| 5.4428 | 1.0 | 158 | 4.9486 | 70672 |
| 3.5492 | 1.5 | 237 | 3.5151 | 105904 |
| 2.8757 | 2.0 | 316 | 2.7888 | 141328 |
| 2.4262 | 2.5 | 395 | 2.3200 | 176752 |
| 2.0531 | 3.0 | 474 | 2.0418 | 211808 |
| 1.8642 | 3.5 | 553 | 1.8571 | 247104 |
| 1.9868 | 4.0 | 632 | 1.7518 | 282048 |
| 1.5838 | 4.5 | 711 | 1.6791 | 317248 |
| 1.6522 | 5.0 | 790 | 1.6322 | 352592 |
| 1.8935 | 5.5 | 869 | 1.5955 | 388176 |
| 1.5731 | 6.0 | 948 | 1.5685 | 423184 |
| 1.7988 | 6.5 | 1027 | 1.5475 | 458640 |
| 1.4869 | 7.0 | 1106 | 1.5413 | 493440 |
| 1.5489 | 7.5 | 1185 | 1.5307 | 528768 |
| 1.3816 | 8.0 | 1264 | 1.5257 | 563872 |
| 1.4517 | 8.5 | 1343 | 1.5241 | 599232 |
| 1.8496 | 9.0 | 1422 | 1.5173 | 634544 |
| 1.6644 | 9.5 | 1501 | 1.5189 | 670064 |
| 1.5860 | 10.0 | 1580 | 1.5196 | 705184 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1