# gpt2_te_add_tokens
This model is a fine-tuned version of gpt2 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.6259
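
Since the evaluation loss is the mean token-level cross-entropy of a causal language model, it corresponds to a perplexity of roughly exp(1.6259) ≈ 5.08.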
## Model description
More information needed
## Intended uses & limitations
More information needed
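
No usage guidance was provided, but as a minimal sketch the model should load like any other gpt2-based causal LM via the transformers library. The repo ID is taken from this card; the prompt and decoding settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NareshSandrugu/gpt2_te_add_tokens"  # repo ID from this card

# Load the fine-tuned tokenizer and model; loading the tokenizer from the
# same repo matters here, since it carries any tokens added during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative generation; the prompt and sampling settings are assumptions.
inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```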
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training; a code sketch mapping them onto `TrainingArguments` follows the list:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
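
For reference, a sketch of how the hyperparameters above map onto the transformers `TrainingArguments` API; `output_dir` is a hypothetical placeholder, and all other values are taken directly from the list:

```python
from transformers import TrainingArguments

# Hypothetical output_dir; every other value comes from the list above.
training_args = TrainingArguments(
    output_dir="gpt2_te_add_tokens",
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    adam_beta1=0.9,                  # Adam settings, as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=5,
)
```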
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 3.8474 | 0.27 | 500 | 2.3014 |
| 2.2773 | 0.54 | 1000 | 2.0143 |
| 2.1078 | 0.81 | 1500 | 1.9107 |
| 2.0185 | 1.08 | 2000 | 1.8528 |
| 1.9580 | 1.35 | 2500 | 1.8083 |
| 1.9204 | 1.62 | 3000 | 1.7738 |
| 1.8846 | 1.89 | 3500 | 1.7471 |
| 1.8342 | 2.16 | 4000 | 1.7284 |
| 1.8123 | 2.43 | 4500 | 1.7062 |
| 1.7956 | 2.70 | 5000 | 1.6830 |
| 1.7879 | 2.97 | 5500 | 1.6709 |
| 1.7341 | 3.24 | 6000 | 1.6550 |
| 1.7354 | 3.50 | 6500 | 1.6453 |
| 1.7107 | 3.77 | 7000 | 1.6381 |
| 1.7059 | 4.04 | 7500 | 1.6326 |
| 1.6719 | 4.31 | 8000 | 1.6290 |
| 1.6743 | 4.58 | 8500 | 1.6273 |
| 1.6738 | 4.85 | 9000 | 1.6259 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0