# train_wsc_101112_1760347667
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the wsc (Winograd Schema Challenge) dataset. It achieves the following results on the evaluation set:
- Loss: 0.3457
- Num Input Tokens Seen: 488816
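Since PEFT is listed under the framework versions below, this checkpoint is presumably a parameter-efficient adapter on top of the base model rather than full fine-tuned weights. The following is a minimal inference sketch under that assumption; the Winograd-style prompt is only illustrative, since the exact input format used during fine-tuning is not documented here.

```python
# Minimal inference sketch (assumes this repo hosts a PEFT adapter;
# the prompt below is illustrative, not the documented input format).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_wsc_101112_1760347667"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

prompt = ("The trophy doesn't fit in the suitcase because it is too big. "
          "What does 'it' refer to?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```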
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch using these values follows the list):
- learning_rate: 0.03
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
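The training script itself is not included on this card. As a rough sketch, the values above map onto Hugging Face `TrainingArguments` (Transformers 4.51.3) as shown below; the eval interval and the input-token logging flag are inferred from the results table and are assumptions, as are the PEFT configuration and data pipeline, which are not documented here.

```python
# Sketch of the listed hyperparameters as TrainingArguments.
# eval_steps=63 and include_num_input_tokens_seen=True are inferred
# from the results table, not stated on the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_wsc_101112_1760347667",
    learning_rate=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    eval_strategy="steps",
    eval_steps=63,
    include_num_input_tokens_seen=True,
)
```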
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.451 | 0.504 | 63 | 0.5548 | 24608 |
| 0.349 | 1.008 | 126 | 0.3484 | 49296 |
| 0.3646 | 1.512 | 189 | 0.3629 | 74672 |
| 0.3553 | 2.016 | 252 | 0.3527 | 98816 |
| 0.4132 | 2.52 | 315 | 0.3512 | 123680 |
| 0.3561 | 3.024 | 378 | 0.3512 | 147776 |
| 0.3345 | 3.528 | 441 | 0.3568 | 173312 |
| 0.3828 | 4.032 | 504 | 0.3574 | 197728 |
| 0.3481 | 4.536 | 567 | 0.3481 | 222560 |
| 0.3524 | 5.04 | 630 | 0.3506 | 246848 |
| 0.3619 | 5.544 | 693 | 0.3484 | 271008 |
| 0.3418 | 6.048 | 756 | 0.3473 | 295984 |
| 0.354 | 6.552 | 819 | 0.3495 | 320080 |
| 0.348 | 7.056 | 882 | 0.3514 | 345136 |
| 0.3481 | 7.56 | 945 | 0.3505 | 370416 |
| 0.3579 | 8.064 | 1008 | 0.3484 | 394688 |
| 0.3466 | 8.568 | 1071 | 0.3484 | 418880 |
| 0.3358 | 9.072 | 1134 | 0.3500 | 444304 |
| 0.3486 | 9.576 | 1197 | 0.3457 | 469328 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
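A quick way to check a local environment against these pins (a convenience sketch, not part of the original card):

```python
# Compare installed package versions with those listed above.
import datasets, peft, tokenizers, torch, transformers

pins = {peft: "0.15.2", transformers: "4.51.3", torch: "2.8.0+cu128",
        datasets: "3.6.0", tokenizers: "0.21.1"}
for mod, want in pins.items():
    mark = "OK" if mod.__version__ == want else "MISMATCH"
    print(f"{mod.__name__}: installed {mod.__version__}, card lists {want} [{mark}]")
```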