---
library_name: transformers
license: mit
base_model: microsoft/DialoGPT-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dialochess-v3
  results: []
---

# dialochess-v3

This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8843
- Accuracy: 0.0002

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged inference sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch at the end of this card):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.3128        | 0.1616 | 200  | 1.1914          | 0.0002   |
| 1.1935        | 0.3231 | 400  | 1.0974          | 0.0002   |
| 1.1181        | 0.4847 | 600  | 1.0419          | 0.0      |
| 1.0778        | 0.6462 | 800  | 1.0080          | 0.0      |
| 1.0426        | 0.8078 | 1000 | 0.9828          | 0.0002   |
| 1.0185        | 0.9693 | 1200 | 0.9612          | 0.0002   |
| 1.0075        | 1.1309 | 1400 | 0.9458          | 0.0001   |
| 0.9765        | 1.2924 | 1600 | 0.9348          | 0.0002   |
| 0.9806        | 1.4540 | 1800 | 0.9248          | 0.0001   |
| 0.9542        | 1.6155 | 2000 | 0.9132          | 0.0002   |
| 0.9684        | 1.7771 | 2200 | 0.9059          | 0.0002   |
| 0.9525        | 1.9386 | 2400 | 0.9015          | 0.0002   |
| 0.9396        | 2.1002 | 2600 | 0.8960          | 0.0002   |
| 0.9342        | 2.2617 | 2800 | 0.8896          | 0.0002   |
| 0.9327        | 2.4233 | 3000 | 0.8874          | 0.0002   |
| 0.9344        | 2.5848 | 3200 | 0.8848          | 0.0002   |
| 0.9272        | 2.7464 | 3400 | 0.8848          | 0.0002   |
| 0.9288        | 2.9079 | 3600 | 0.8843          | 0.0002   |

### Framework versions

- Transformers 4.57.2
- PyTorch 2.9.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
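
### Reproducing the training configuration

The hyperparameters listed above map directly onto `transformers.TrainingArguments`. The sketch below is illustrative rather than the exact training script: the dataset, preprocessing, and evaluation metric are not documented in this card and are left as placeholders; only the listed hyperparameters and the base model are taken from the card itself.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base model named in this card.
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")

# Values taken from the "Training hyperparameters" section above.
training_args = TrainingArguments(
    output_dir="dialochess-v3",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",   # AdamW (fused), betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    fp16=True,                   # Native AMP mixed precision
    eval_strategy="steps",       # the results table reports evaluation every 200 steps
    eval_steps=200,
    logging_steps=200,
)

# The datasets are placeholders: the card does not name the training or evaluation data.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
# )
# trainer.train()
```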
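
### Inference example

A minimal generation sketch, assuming the fine-tuned weights are available locally under `dialochess-v3` (or under the corresponding Hub repository) and that the model is used as a standard causal language model. The prompt below is hypothetical; the expected input format is not described in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: point this at wherever the fine-tuned weights actually live.
model_id = "dialochess-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

# Hypothetical prompt; adjust to whatever format the model was trained on.
prompt = "1. e4 e5 2. Nf3"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```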