# train_wsc_1755694498
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wsc dataset. It achieves the following results on the evaluation set:
- Loss: 0.3509
- Num Input Tokens Seen: 437760
## Model description
More information needed
## Intended uses & limitations
More information needed
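As a minimal loading sketch: the framework versions below list PEFT, so this checkpoint is presumably a lightweight adapter on top of the base model rather than a full set of weights. The repo id comes from this card; the dtype, device settings, and the WSC-style prompt are illustrative assumptions, not the format used during training.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and attach the fine-tuned adapter.
# Loading via PEFT is an assumption based on the PEFT version listed under
# "Framework versions"; dtype/device settings are illustrative.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "rbelanec/train_wsc_1755694498")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# WSC-style coreference prompt; the exact prompt format used in training is
# not documented on this card, so this is only a plausible example.
prompt = (
    "Sentence: The trophy doesn't fit into the brown suitcase "
    "because it is too large.\n"
    "Question: What does 'it' refer to?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```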
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch equivalent to these settings follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
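A sketch of the equivalent `TrainingArguments`, assuming the run used the Hugging Face `Trainer`; `output_dir` and anything not in the list above are placeholders:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir and the logging/eval
# cadence are placeholders, not values taken from the actual run.
args = TrainingArguments(
    output_dir="train_wsc_1755694498",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```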
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.4134 | 0.5020 | 125 | 0.9847 | 22304 |
| 0.4292 | 1.0040 | 250 | 0.3975 | 44064 |
| 0.3916 | 1.5060 | 375 | 0.3906 | 65808 |
| 0.3158 | 2.0080 | 500 | 0.3806 | 88048 |
| 0.3942 | 2.5100 | 625 | 0.3658 | 109696 |
| 0.3565 | 3.0120 | 750 | 0.3480 | 131872 |
| 0.3885 | 3.5141 | 875 | 0.3620 | 154416 |
| 0.3387 | 4.0161 | 1000 | 0.3514 | 176048 |
| 0.3332 | 4.5181 | 1125 | 0.3515 | 198432 |
| 0.3669 | 5.0201 | 1250 | 0.3565 | 219680 |
| 0.3469 | 5.5221 | 1375 | 0.3494 | 241136 |
| 0.3545 | 6.0241 | 1500 | 0.3506 | 263616 |
| 0.3451 | 6.5261 | 1625 | 0.3497 | 285424 |
| 0.324 | 7.0281 | 1750 | 0.3610 | 307792 |
| 0.3183 | 7.5301 | 1875 | 0.3650 | 329840 |
| 0.3382 | 8.0321 | 2000 | 0.3508 | 351552 |
| 0.3475 | 8.5341 | 2125 | 0.3498 | 373424 |
| 0.3608 | 9.0361 | 2250 | 0.3510 | 395616 |
| 0.3417 | 9.5382 | 2375 | 0.3496 | 417520 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
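To approximate this environment, the versions above can be pinned at install time. Note that the `+cu128` PyTorch build comes from the CUDA 12.8 wheel index rather than PyPI; treat the index URL as an assumption about your setup:

```bash
pip install peft==0.15.2 transformers==4.51.3 datasets==3.6.0 tokenizers==0.21.1
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu128
```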