# train_svamp_1757340275
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set:
- Loss: 0.2841
- Num Input Tokens Seen: 704272
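Below is a minimal usage sketch, assuming this is a PEFT (LoRA-style) adapter hosted at `rbelanec/train_svamp_1757340275` and that the base model's chat template applies; the example prompt is illustrative, not a documented format for this model.

```python
# Hedged loading sketch: the adapter repo id and prompt below are taken from
# this card's context, but the exact inference setup is an assumption.
# Note: the base model is gated and requires accepted access on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_svamp_1757340275"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach fine-tuned adapter

# SVAMP-style math word problem; the chat-template prompt is illustrative.
messages = [{"role": "user", "content": "Jake has 12 apples. He gives 5 to Amy. How many apples does Jake have left?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```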
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
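For reference, a hedged sketch of how the hyperparameters above might map onto `transformers.TrainingArguments`; `output_dir` and anything not in the list are placeholders, and the actual training script may have differed.

```python
# Reconstruction of the training configuration from the hyperparameter list;
# output_dir is a placeholder, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_svamp_1757340275",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```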
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 2.3949 | 0.5 | 79 | 2.3833 | 35296 |
| 1.9797 | 1.0 | 158 | 1.8895 | 70400 |
| 1.5462 | 1.5 | 237 | 1.5126 | 106208 |
| 1.1385 | 2.0 | 316 | 1.1513 | 140736 |
| 0.734 | 2.5 | 395 | 0.8587 | 176064 |
| 0.5419 | 3.0 | 474 | 0.6560 | 211024 |
| 0.3904 | 3.5 | 553 | 0.5181 | 246128 |
| 0.3267 | 4.0 | 632 | 0.4333 | 281616 |
| 0.368 | 4.5 | 711 | 0.3808 | 316976 |
| 0.2456 | 5.0 | 790 | 0.3472 | 352256 |
| 0.2224 | 5.5 | 869 | 0.3273 | 387360 |
| 0.1667 | 6.0 | 948 | 0.3125 | 422464 |
| 0.1728 | 6.5 | 1027 | 0.3022 | 457760 |
| 0.1274 | 7.0 | 1106 | 0.2953 | 492912 |
| 0.1583 | 7.5 | 1185 | 0.2896 | 528336 |
| 0.134 | 8.0 | 1264 | 0.2862 | 563600 |
| 0.1712 | 8.5 | 1343 | 0.2843 | 598992 |
| 0.1468 | 9.0 | 1422 | 0.2843 | 633984 |
| 0.1135 | 9.5 | 1501 | 0.2850 | 669152 |
| 0.1658 | 10.0 | 1580 | 0.2841 | 704272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1