# gpt2-expanded-test-distilled

This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.0336
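
A minimal generation sketch, assuming the checkpoint is published under the repository ID shown in this card's title (substitute a local output directory if it was not pushed to the Hub):

```python
from transformers import pipeline

# Assumed repository ID, taken from this card's title; replace with a local
# checkpoint path if the model only exists on disk.
model_id = "bestofbothworldsenjoyer/gpt2-expanded-test-distilled"

generator = pipeline("text-generation", model=model_id)
output = generator("Once upon a time", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```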
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25.0
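
A minimal sketch of a `TrainingArguments` setup mirroring the hyperparameters above, assuming the Hugging Face `Trainer` API; the output directory name is hypothetical and dataset preparation is not described by this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above. A Trainer built from these arguments
# would additionally need the distilbert/distilgpt2 model, its tokenizer, and
# tokenized train/eval datasets, none of which this card specifies.
args = TrainingArguments(
    output_dir="gpt2-expanded-test-distilled",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=25.0,
    eval_strategy="epoch",
    logging_strategy="epoch",
)
```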
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 3.4795 | 1.0 | 549 | 3.0471 |
| 3.0621 | 2.0 | 1098 | 2.9997 |
| 3.0029 | 3.0 | 1647 | 2.9767 |
| 2.9472 | 4.0 | 2196 | 2.9579 |
| 2.9013 | 5.0 | 2745 | 2.9493 |
| 2.8283 | 6.0 | 3294 | 2.9476 |
| 2.8005 | 7.0 | 3843 | 2.9485 |
| 2.7523 | 8.0 | 4392 | 2.9494 |
| 2.7257 | 9.0 | 4941 | 2.9516 |
| 2.6927 | 10.0 | 5490 | 2.9594 |
| 2.6327 | 11.0 | 6039 | 2.9631 |
| 2.6106 | 12.0 | 6588 | 2.9706 |
| 2.5743 | 13.0 | 7137 | 2.9760 |
| 2.5676 | 14.0 | 7686 | 2.9845 |
| 2.5504 | 15.0 | 8235 | 2.9869 |
| 2.5242 | 16.0 | 8784 | 2.9978 |
| 2.5097 | 17.0 | 9333 | 3.0007 |
| 2.4938 | 18.0 | 9882 | 3.0114 |
| 2.4908 | 19.0 | 10431 | 3.0178 |
| 2.4731 | 20.0 | 10980 | 3.0204 |
| 2.4503 | 21.0 | 11529 | 3.0228 |
| 2.4441 | 22.0 | 12078 | 3.0254 |
| 2.4378 | 23.0 | 12627 | 3.0287 |
| 2.4286 | 24.0 | 13176 | 3.0312 |
| 2.4216 | 25.0 | 13725 | 3.0336 |
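
The validation loss bottoms out at epoch 6 (2.9476) and drifts upward afterwards while the training loss keeps falling, a pattern consistent with overfitting. One way to keep the best checkpoint rather than the final one is sketched below; this is an assumption for illustration, not part of the configuration used for this run:

```python
from transformers import TrainingArguments

# Not the setup used for this run: a sketch of how load_best_model_at_end
# would retain the checkpoint with the lowest validation loss (epoch 6 here).
args = TrainingArguments(
    output_dir="gpt2-expanded-test-distilled",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```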
### Framework versions
- Transformers 4.57.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1