# train_svamp_101112_1760638003
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset. It achieves the following results on the evaluation set:
- Loss: 2.3973
- Num input tokens seen: 1,430,592
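This card does not include a usage snippet, so the following is a minimal, hedged sketch of loading the model with PEFT. It assumes this repository hosts an adapter whose config points at the base model; the prompt and generation settings are illustrative only:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: this repo contains a PEFT adapter trained on top of
# meta-llama/Meta-Llama-3-8B-Instruct (as stated in this card).
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_svamp_101112_1760638003",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Illustrative SVAMP-style word problem; the exact prompt format used in
# training is not documented in this card.
prompt = "Q: A shop had 25 apples and sold 9. How many apples are left?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```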
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
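These values map directly onto the standard `transformers.TrainingArguments` API. A minimal sketch is below; the actual training script is not included in this card, and `output_dir` is an assumption:

```python
from transformers import TrainingArguments

# Hedged sketch: mirrors the hyperparameters listed above using the standard
# Trainer API; the run's real script and PEFT config are not part of this card.
training_args = TrainingArguments(
    output_dir="train_svamp_101112_1760638003",  # assumption, not from the card
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```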
### Training results

The best validation loss (2.3973, the result reported above) was reached at epoch 13.
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 2.465 | 1.0 | 158 | 2.4358 | 71552 |
| 2.3301 | 2.0 | 316 | 2.4296 | 142960 |
| 2.4232 | 3.0 | 474 | 2.4149 | 214432 |
| 2.3355 | 4.0 | 632 | 2.4075 | 286000 |
| 2.483 | 5.0 | 790 | 2.4046 | 357888 |
| 2.3177 | 6.0 | 948 | 2.4016 | 429456 |
| 2.2695 | 7.0 | 1106 | 2.4003 | 501136 |
| 2.2541 | 8.0 | 1264 | 2.3984 | 573104 |
| 2.5143 | 9.0 | 1422 | 2.3988 | 644752 |
| 2.4114 | 10.0 | 1580 | 2.3987 | 716192 |
| 2.4926 | 11.0 | 1738 | 2.4027 | 787200 |
| 2.338 | 12.0 | 1896 | 2.4016 | 858736 |
| 2.2447 | 13.0 | 2054 | 2.3973 | 930160 |
| 2.2965 | 14.0 | 2212 | 2.4032 | 1001792 |
| 2.4339 | 15.0 | 2370 | 2.4001 | 1073248 |
| 2.3818 | 16.0 | 2528 | 2.4013 | 1144672 |
| 2.3148 | 17.0 | 2686 | 2.3996 | 1216160 |
| 2.3345 | 18.0 | 2844 | 2.4025 | 1287728 |
| 2.4071 | 19.0 | 3002 | 2.4003 | 1359120 |
| 2.5012 | 20.0 | 3160 | 2.3987 | 1430592 |
### Framework versions

- PEFT 0.17.1
- Transformers 4.51.3
- PyTorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4