---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: gpt2-text-classification-v2
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/date3k2/gpt2-text-classification/runs/52an6tu1)

[Visualize in Weights & Biases](https://wandb.ai/date3k2/gpt2-text-classification/runs/lu5o1szk)

# gpt2-text-classification-v2

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2002
- Accuracy: 0.9342
- F1: 0.9340
- Recall: 0.9314
- Precision: 0.9367

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Recall | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.327         | 0.9974 | 260  | 0.2559          | 0.8973   | 0.8929 | 0.8558 | 0.9333    |
| 0.241         | 1.9987 | 521  | 0.2039          | 0.919    | 0.9180 | 0.9066 | 0.9296    |
| 0.244         | 3.0    | 782  | 0.2156          | 0.9154   | 0.9192 | 0.9621 | 0.8799    |
| 0.1843        | 3.9974 | 1042 | 0.1888          | 0.9299   | 0.9288 | 0.9154 | 0.9427    |
| 0.1608        | 4.9987 | 1303 | 0.1855          | 0.9301   | 0.9291 | 0.9158 | 0.9428    |
| 0.124         | 6.0    | 1564 | 0.1826          | 0.9322   | 0.9319 | 0.9282 | 0.9357    |
| 0.112         | 6.9974 | 1820 | 0.2099          | 0.9315   | 0.9303 | 0.9138 | 0.9473    |
| 0.0903        | 7.9987 | 2081 | 0.2002          | 0.9342   | 0.9340 | 0.9314 | 0.9367    |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
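
### Reproducing the training setup

For reference, a minimal sketch of a `Trainer` configuration that mirrors the hyperparameters listed above. The label count, datasets, and metric function are placeholders, since the training data is not documented in this card; Adam betas/epsilon and the linear scheduler match the `Trainer` defaults.

```python
from transformers import (
    AutoTokenizer,
    GPT2ForSequenceClassification,
    TrainingArguments,
)

# GPT-2 has no pad token by default; reuse the EOS token for padding.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# num_labels=2 is an assumption; the actual dataset is not documented here.
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Mirrors the hyperparameters above; effective train batch size is 32 * 3 = 96.
training_args = TrainingArguments(
    output_dir="gpt2-text-classification-v2",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=3,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
    report_to="wandb",
)

# Trainer wiring is omitted because train_dataset, eval_dataset, and
# compute_metrics depend on the undocumented dataset:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset,
#                   compute_metrics=compute_metrics)
# trainer.train()
```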
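
### Usage

A minimal inference sketch. The repo id below is an assumption inferred from the W&B project owner and the model name; adjust it to wherever the checkpoint is actually hosted, and note that the label names depend on the undocumented training data.

```python
from transformers import pipeline

# Assumed repo id; replace with the actual hosted checkpoint path.
classifier = pipeline(
    "text-classification",
    model="date3k2/gpt2-text-classification-v2",
)

print(classifier("Example input text to classify."))
# Illustrative output only; labels and scores depend on the training data:
# [{'label': 'LABEL_0', 'score': 0.98}]
```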