# train_mrpc_1754652142
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the mrpc dataset. It achieves the following results on the evaluation set:
- Loss: 0.1395
- Num Input Tokens Seen: 3388032
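Since the framework versions below list PEFT, the checkpoint is presumably a parameter-efficient adapter rather than full model weights. The following is a hypothetical usage sketch, not from the original card: it assumes the repo hosts a LoRA-style adapter loadable with `PeftModel.from_pretrained`, and the prompt format is illustrative only, since the training prompt is not documented.

```python
# Hypothetical usage sketch: assumes this repo is a PEFT (LoRA-style) adapter
# for the base model; the prompt format is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "rbelanec/train_mrpc_1754652142")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# MRPC is a paraphrase-detection task, so a yes/no paraphrase prompt is a
# plausible way to query the fine-tuned model.
prompt = (
    "Are these two sentences paraphrases? Answer yes or no.\n"
    "Sentence 1: The storm caused widespread damage.\n"
    "Sentence 2: Widespread damage was caused by the storm."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```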
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
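The card only names "the mrpc dataset". As a hedged illustration, MRPC (Microsoft Research Paraphrase Corpus) is commonly loaded as the GLUE subset on the Hugging Face Hub; whether this exact copy was used for training is an assumption.

```python
# Hedged illustration: loads the GLUE MRPC subset from the Hub. Whether this
# exact copy was the training source is an assumption.
from datasets import load_dataset

mrpc = load_dataset("glue", "mrpc")
print(mrpc)              # train / validation / test splits
print(mrpc["train"][0])  # fields: sentence1, sentence2, label, idx
```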
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
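For reference, here is a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The actual training script is not part of the card, so the `output_dir` and the use of `TrainingArguments` itself are assumptions.

```python
# Minimal sketch mapping the listed hyperparameters onto TrainingArguments;
# the output_dir is hypothetical and the training script is not documented.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_mrpc_1754652142",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```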
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.2989 | 0.5 | 413 | 0.1758 | 170336 |
| 0.1096 | 1.0 | 826 | 0.1457 | 339568 |
| 0.161 | 1.5 | 1239 | 0.1939 | 509456 |
| 0.0214 | 2.0 | 1652 | 0.1303 | 678688 |
| 0.1076 | 2.5 | 2065 | 0.1298 | 847104 |
| 0.0259 | 3.0 | 2478 | 0.1248 | 1017368 |
| 0.1086 | 3.5 | 2891 | 0.1059 | 1187704 |
| 0.1231 | 4.0 | 3304 | 0.1156 | 1356744 |
| 0.0881 | 4.5 | 3717 | 0.1146 | 1526088 |
| 0.1287 | 5.0 | 4130 | 0.1349 | 1694912 |
| 0.259 | 5.5 | 4543 | 0.1434 | 1864000 |
| 0.0429 | 6.0 | 4956 | 0.1030 | 2033992 |
| 0.1231 | 6.5 | 5369 | 0.1323 | 2203208 |
| 0.0422 | 7.0 | 5782 | 0.1177 | 2372464 |
| 0.1697 | 7.5 | 6195 | 0.1324 | 2540880 |
| 0.11 | 8.0 | 6608 | 0.1313 | 2710624 |
| 0.0609 | 8.5 | 7021 | 0.1579 | 2880128 |
| 0.1143 | 9.0 | 7434 | 0.1487 | 3049392 |
| 0.0957 | 9.5 | 7847 | 0.1515 | 3218352 |
| 0.0692 | 10.0 | 8260 | 0.1520 | 3388032 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1