---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rerun-new-loop2
  results: []
---

# all-base-rerun-new-loop2

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0969

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3479        | 0.29  | 500   | 5.3389          |
| 5.0203        | 0.58  | 1000  | 4.9188          |
| 4.6949        | 0.87  | 1500  | 4.6855          |
| 4.4434        | 1.16  | 2000  | 4.5414          |
| 4.2872        | 1.46  | 2500  | 4.4217          |
| 4.1743        | 1.75  | 3000  | 4.3230          |
| 4.0791        | 2.04  | 3500  | 4.2448          |
| 3.8856        | 2.33  | 4000  | 4.2016          |
| 3.8509        | 2.62  | 4500  | 4.1489          |
| 3.8144        | 2.91  | 5000  | 4.0998          |
| 3.6394        | 3.2   | 5500  | 4.0935          |
| 3.5747        | 3.49  | 6000  | 4.0638          |
| 3.5592        | 3.78  | 6500  | 4.0296          |
| 3.4711        | 4.07  | 7000  | 4.0278          |
| 3.3061        | 4.37  | 7500  | 4.0241          |
| 3.2984        | 4.66  | 8000  | 4.0105          |
| 3.2917        | 4.95  | 8500  | 3.9989          |
| 3.1462        | 5.24  | 9000  | 4.0090          |
| 3.1241        | 5.53  | 9500  | 4.0085          |
| 3.1176        | 5.82  | 10000 | 4.0075          |

### Framework versions

- Transformers 4.26.1
- PyTorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
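
The validation loss is a mean cross-entropy in nats per token, so it converts to perplexity by exponentiation. A quick sketch of that conversion for the final evaluation loss (the perplexity figure is derived here; the original run reports only the loss):

```python
import math

# Final evaluation loss reported above (cross-entropy, nats per token)
eval_loss = 4.0969

# Perplexity is the exponential of the mean cross-entropy loss
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.1f}")  # ≈ 60.2
```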
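
The hyperparameters listed above map directly onto the `transformers` `Trainer` API. Below is a minimal sketch of an equivalent `TrainingArguments` configuration, assuming a single-device Trainer setup; the output directory is a placeholder, and the 500-step evaluation cadence is inferred from the results table rather than stated in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="all-base-rerun-new-loop2",  # placeholder output path
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumption: matches the 500-step cadence in the table
    eval_steps=500,
    logging_steps=500,
)
```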
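
Since the base model is GPT-2, the fine-tuned checkpoint loads with the standard causal-LM classes. A minimal usage sketch, assuming the checkpoint is available locally or on the Hub (the repo id below is a placeholder; substitute the actual path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "all-base-rerun-new-loop2"  # placeholder: replace with the actual repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Generate a short continuation from a prompt
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```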