---
library_name: transformers
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-python-purpose
  results: []
datasets:
- flytech/python-codes-25k
language:
- en
---

# gpt2-finetuned-python-purpose

This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4763

## Model description

A GPT-2 (124M-parameter) causal language model fine-tuned to generate Python code. Beyond the base model and dataset named above, no further details about the run are documented.

## Intended uses & limitations

Intended for generating short Python snippets from natural-language prompts (see the usage sketch at the end of this card). Like any small language model, it can produce incorrect or insecure code; review its output before running it.

## Training and evaluation data

The model was fine-tuned on [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k), a dataset of roughly 25,000 Python coding examples. The train/validation split behind the reported losses is not documented here.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of these settings as `TrainingArguments` appears at the end of this card):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0083        | 1.0   | 8233  | 0.9360          |
| 0.6853        | 2.0   | 16466 | 0.7537          |
| 0.4557        | 3.0   | 24699 | 0.6140          |
| 0.2710        | 4.0   | 32932 | 0.5133          |
| 0.1585        | 5.0   | 41165 | 0.4763          |

### Framework versions

- Transformers 4.57.1
- Pytorch 2.4.1+cu121
- Datasets 4.4.1
- Tokenizers 0.22.1
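## How to use

A minimal inference sketch. The card does not state the Hub namespace the model is published under, so the repo id below is a placeholder; point it at the actual Hub path or a local directory containing the fine-tuned weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id (the card does not give the Hub namespace);
# replace with the real path or a local checkpoint directory.
model_id = "<user>/gpt2-finetuned-python-purpose"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt")

# GPT-2 has no pad token; reusing EOS keeps generate() quiet about padding.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```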
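## Reproducing the training setup

The hyperparameters listed under "Training procedure" map onto `TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the script that produced this model: the dataset preprocessing, data collator, and `Trainer` wiring used for the actual run are not documented in this card.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is illustrative only.
training_args = TrainingArguments(
    output_dir="gpt2-finetuned-python-purpose",
    learning_rate=3e-4,                 # 0.0003
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",                # betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",              # assumed: the card reports one validation loss per epoch
)
```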