---
library_name: transformers
language:
- en
base_model: Hartunka/bert_base_rand_10_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert_base_rand_10_v2_qqp
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE QQP
      type: glue
      args: qqp
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8202572347266881
    - name: F1
      type: f1
      value: 0.7589318294907945
---

# bert_base_rand_10_v2_qqp

This model is a fine-tuned version of [Hartunka/bert_base_rand_10_v2](https://huggingface.co/Hartunka/bert_base_rand_10_v2) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3871
- Accuracy: 0.8203
- F1: 0.7589
- Combined Score: 0.7896

## Model description

More information needed

## Intended uses & limitations

The model is a binary classifier for English question pairs: given two questions, it predicts whether they are duplicates of each other. See the inference sketch under "How to use" below. It has only been evaluated on the GLUE QQP validation split, so performance on question pairs from other domains may differ.

## Training and evaluation data

The model was fine-tuned and evaluated on QQP (Quora Question Pairs) from the GLUE benchmark, a binary classification task that labels pairs of Quora questions as duplicates or non-duplicates. The results above are reported on the validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 50

A `TrainingArguments` sketch mirroring these values appears under "Reproducing the training setup" at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.4757        | 1.0   | 1422 | 0.4355          | 0.7913   | 0.6756 | 0.7335         |
| 0.3701        | 2.0   | 2844 | 0.3871          | 0.8203   | 0.7589 | 0.7896         |
| 0.294         | 3.0   | 4266 | 0.3957          | 0.8242   | 0.7747 | 0.7995         |
| 0.2331        | 4.0   | 5688 | 0.4476          | 0.8343   | 0.7689 | 0.8016         |
| 0.1845        | 5.0   | 7110 | 0.4730          | 0.8396   | 0.7799 | 0.8098         |
| 0.1496        | 6.0   | 8532 | 0.4950          | 0.8421   | 0.7814 | 0.8118         |
| 0.1215        | 7.0   | 9954 | 0.6163          | 0.8422   | 0.7848 | 0.8135         |

Although 50 epochs were configured, training stopped after epoch 7, and the headline results correspond to the epoch-2 checkpoint (the lowest validation loss), which is consistent with early stopping and reloading the best checkpoint at the end.

### Framework versions

- Transformers 4.50.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
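
## How to use

Below is a minimal inference sketch. The repository id is inferred from this card's name, and the label order (index 1 = duplicate) follows the standard GLUE QQP convention; both are assumptions, so check the hub page and `model.config.id2label` before relying on them.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repository id, inferred from the card name.
model_id = "Hartunka/bert_base_rand_10_v2_qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QQP is a sentence-pair task: pass both questions together so they
# are encoded as a single pair input.
q1 = "How can I improve my programming skills?"
q2 = "What is the best way to get better at programming?"
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Index 1 = "duplicate" in the standard GLUE QQP label order
# (assumed; confirm via model.config.id2label).
pred = logits.argmax(dim=-1).item()
print("duplicate" if pred == 1 else "not duplicate")
```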
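
## Reproducing the training setup

The hyperparameters listed above map directly onto `transformers.TrainingArguments`. The sketch below mirrors the stated values; the evaluation, saving, and best-model settings are assumptions inferred from the per-epoch results table and the reported epoch-2 checkpoint, not facts stated in the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_rand_10_v2_qqp",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",           # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",         # assumed: the table logs one eval per epoch
    save_strategy="epoch",         # assumed
    load_best_model_at_end=True,   # assumed: reported result is the epoch-2 checkpoint
    metric_for_best_model="loss",  # assumed: epoch 2 has the lowest validation loss
)
```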
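
The "Combined Score" column is the unweighted mean of accuracy and F1, e.g. (0.8203 + 0.7589) / 2 = 0.7896 for the epoch-2 row. Here is a `compute_metrics` sketch using the `evaluate` library's GLUE metric, which returns both numbers for QQP:

```python
import numpy as np
import evaluate

glue_metric = evaluate.load("glue", "qqp")  # returns accuracy and f1

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    result = glue_metric.compute(predictions=predictions, references=labels)
    # Combined Score = mean of accuracy and F1, matching the table above.
    result["combined_score"] = float(np.mean(list(result.values())))
    return result
```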