# train_stsb_456_1760637810
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the stsb (Semantic Textual Similarity Benchmark) dataset. It achieves the following results on the evaluation set:
- Loss: 0.4231
- Num Input Tokens Seen: 8714656
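
Since the framework versions below include PEFT, this checkpoint is presumably a PEFT adapter trained on top of the base model. The snippet below is a minimal, illustrative sketch of loading it with `transformers` and `peft`; the adapter repo id is taken from the model tree at the bottom of this card, and the exact prompt format used for STS-B pairs during training is not documented here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"          # gated base model; requires access
adapter_id = "rbelanec/train_stsb_456_1760637810"        # repo id from the model tree below

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```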
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an illustrative configuration sketch follows the list):
- learning_rate: 0.03
- train_batch_size: 4
- eval_batch_size: 4
- seed: 456
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
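
As a rough illustration, these values map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch, not the actual training script; the `output_dir` and any PEFT/LoRA configuration used for this run are assumptions not documented in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_stsb_456_1760637810",   # placeholder output path
    learning_rate=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```

A learning rate of 0.03 is far higher than what is typical for full fine-tuning, which is consistent with a parameter-efficient method (e.g. a prompt-tuning-style adapter) being trained here.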
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.4531 | 1.0 | 1294 | 0.4877 | 435104 |
| 0.4176 | 2.0 | 2588 | 0.5283 | 870112 |
| 0.3965 | 3.0 | 3882 | 0.4282 | 1305024 |
| 0.4611 | 4.0 | 5176 | 0.4385 | 1742048 |
| 0.3841 | 5.0 | 6470 | 0.4231 | 2176672 |
| 0.4076 | 6.0 | 7764 | 0.4294 | 2613648 |
| 0.4302 | 7.0 | 9058 | 0.4266 | 3049776 |
| 0.3942 | 8.0 | 10352 | 0.4260 | 3486928 |
| 0.2993 | 9.0 | 11646 | 0.4296 | 3924192 |
| 0.2585 | 10.0 | 12940 | 0.4426 | 4360736 |
| 0.3664 | 11.0 | 14234 | 0.4630 | 4793520 |
| 0.2829 | 12.0 | 15528 | 0.4643 | 5230528 |
| 0.2711 | 13.0 | 16822 | 0.5225 | 5664848 |
| 0.2636 | 14.0 | 18116 | 0.5670 | 6100288 |
| 0.1918 | 15.0 | 19410 | 0.6538 | 6534240 |
| 0.1662 | 16.0 | 20704 | 0.7959 | 6969936 |
| 0.173 | 17.0 | 21998 | 0.9833 | 7405056 |
| 0.0825 | 18.0 | 23292 | 1.0842 | 7842624 |
| 0.1425 | 19.0 | 24586 | 1.1006 | 8279952 |
| 0.2022 | 20.0 | 25880 | 1.0974 | 8714656 |

The reported evaluation loss of 0.4231 corresponds to the minimum validation loss in this table, reached at epoch 5; in later epochs validation loss rises while training loss keeps falling.
### Framework versions
- PEFT 0.17.1
- Transformers 4.51.3
- PyTorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
## Model tree for rbelanec/train_stsb_456_1760637810

Base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)