# train_stsb_456_1760637812
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the stsb dataset. It achieves the following results on the evaluation set:
- Loss: 0.4204
- Num Input Tokens Seen: 8714656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
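The card does not document the training and evaluation data beyond the dataset name. Assuming `stsb` refers to the STS-B task of the GLUE benchmark (an assumption, not confirmed by this card), the sentence pairs and similarity labels could be inspected as follows:

```python
# Hedged sketch: assumes "stsb" is the GLUE STS-B task
# (sentence pairs scored with a 0-5 similarity label).
from datasets import load_dataset

dataset = load_dataset("glue", "stsb")
print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # {'sentence1': ..., 'sentence2': ..., 'label': float in [0, 5], 'idx': ...}
```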
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 456
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
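For reference, here is a minimal sketch of how the hyperparameters above map onto Hugging Face `TrainingArguments`. The argument names follow the standard Transformers API; the original training script is not part of this card, so treat this as an illustration rather than the exact configuration used.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments
# (not the original training script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_stsb_456_1760637812",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```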
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.4181 | 1.0 | 1294 | 0.4885 | 435104 |
| 0.4233 | 2.0 | 2588 | 0.4859 | 870112 |
| 0.3698 | 3.0 | 3882 | 0.4204 | 1305024 |
| 0.4104 | 4.0 | 5176 | 0.4411 | 1742048 |
| 0.3322 | 5.0 | 6470 | 0.4568 | 2176672 |
| 0.2642 | 6.0 | 7764 | 0.5372 | 2613648 |
| 0.3637 | 7.0 | 9058 | 0.5623 | 3049776 |
| 0.187 | 8.0 | 10352 | 0.6460 | 3486928 |
| 0.1922 | 9.0 | 11646 | 0.6720 | 3924192 |
| 0.0937 | 10.0 | 12940 | 0.8131 | 4360736 |
| 0.1833 | 11.0 | 14234 | 0.9352 | 4793520 |
| 0.0705 | 12.0 | 15528 | 1.0894 | 5230528 |
| 0.0947 | 13.0 | 16822 | 1.2369 | 5664848 |
| 0.0335 | 14.0 | 18116 | 1.4716 | 6100288 |
| 0.0609 | 15.0 | 19410 | 1.7229 | 6534240 |
| 0.0017 | 16.0 | 20704 | 1.9238 | 6969936 |
| 0.0002 | 17.0 | 21998 | 2.0110 | 7405056 |
| 0.0004 | 18.0 | 23292 | 2.0653 | 7842624 |
| 0.0004 | 19.0 | 24586 | 2.0840 | 8279952 |
| 0.0954 | 20.0 | 25880 | 2.0865 | 8714656 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.51.3
- Pytorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
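Since this model is a PEFT adapter on top of meta-llama/Meta-Llama-3-8B-Instruct, the sketch below shows one way it could be loaded for inference with the framework versions listed above. The adapter repo id is assumed from this card's name, and the prompt format expected at inference time is not documented here.

```python
# Hedged loading sketch: base model + PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_stsb_456_1760637812"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```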