---
tags:
- generated_from_trainer
model-index:
- name: GPT2-705M
  results: []
---
# GPT2-705M
This model is a fine-tuned version of an unspecified base model, trained on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 4.3316
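
As a quick sanity check, the checkpoint can be loaded and sampled with the standard `transformers` causal-LM API. A minimal sketch, assuming the model is hosted under the placeholder repo id `GPT2-705M` (substitute the actual Hub path or local checkpoint directory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "GPT2-705M" is a placeholder repo id (assumed); replace it with the
# actual Hub path or local checkpoint directory for this model.
tokenizer = AutoTokenizer.from_pretrained("GPT2-705M")
model = AutoModelForCausalLM.from_pretrained("GPT2-705M")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```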
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.00025
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 12
- mixed_precision_training: Native AMP
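
A minimal sketch mapping these hyperparameters onto `transformers.TrainingArguments`; the values are taken from this card, while `output_dir` is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GPT2-705M",           # assumed output path, not stated in this card
    learning_rate=2.5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,    # total train batch size: 2 * 2 = 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=12,
    fp16=True,                        # "Native AMP" mixed-precision training
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```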
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 5.8964 | 1.0 | 69 | 5.8372 |
| 5.2016 | 2.0 | 138 | 5.0017 |
| 4.4098 | 3.0 | 207 | 4.6658 |
| 4.2459 | 4.0 | 276 | 4.5260 |
| 3.9837 | 5.0 | 345 | 4.4107 |
| 3.8526 | 6.0 | 414 | 4.3741 |
| 3.5545 | 7.0 | 483 | 4.3328 |
| 3.392 | 8.0 | 552 | 4.3175 |
| 3.3396 | 9.0 | 621 | 4.3236 |
| 3.0426 | 10.0 | 690 | 4.3322 |
| 3.028 | 11.0 | 759 | 4.3254 |
| 3.0344 | 12.0 | 828 | 4.3316 |
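
Validation loss bottoms out around epoch 8 (4.3175) and drifts slightly upward afterward while training loss keeps falling, which suggests mild overfitting in the final epochs. For reference, and assuming the reported loss is the usual per-token cross-entropy in nats, the final validation loss corresponds to a perplexity of roughly exp(4.3316) ≈ 76:

```python
import math

# Perplexity is the exponential of the per-token cross-entropy loss.
print(math.exp(4.3316))  # ≈ 76.1
```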
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0