---
library_name: transformers
language:
- en
base_model: Hartunka/bert_base_rand_20_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert_base_rand_20_v2_qqp
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE QQP
      type: glue
      args: qqp
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.826020281968835
    - name: F1
      type: f1
      value: 0.7738118206958647
---
# bert_base_rand_20_v2_qqp
This model is a fine-tuned version of [Hartunka/bert_base_rand_20_v2](https://huggingface.co/Hartunka/bert_base_rand_20_v2) on the GLUE QQP dataset. It achieves the following results on the evaluation set:

- Loss: 0.3937
- Accuracy: 0.8260
- F1: 0.7738
- Combined Score: 0.7999 (the arithmetic mean of accuracy and F1)
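
For illustration, since the card describes text classification on QQP question pairs, the model can be used as a sentence-pair classifier. This is a minimal sketch: the Hub model ID is an assumption inferred from the card's title, not confirmed by the card itself.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hartunka/bert_base_rand_20_v2_qqp",  # hypothetical Hub ID
)

# QQP is a sentence-pair task: pass both questions as text / text_pair
# so the tokenizer encodes them as a single paired input.
prediction = classifier({
    "text": "How do I learn Python quickly?",
    "text_pair": "What is the fastest way to learn Python?",
})
print(prediction)  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```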
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
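
The sketch below reconstructs these settings as a `TrainingArguments` object. It is not the author's original script: the `output_dir` and the per-epoch evaluation cadence are assumptions (the results table below reports metrics once per epoch), and the batch sizes are taken as per-device values.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_rand_20_v2_qqp",  # hypothetical
    learning_rate=5e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",  # assumed from the per-epoch results table
)
```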
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|---|---|---|---|---|---|---|
| 0.4751 | 1.0 | 1422 | 0.4373 | 0.7922 | 0.6774 | 0.7348 |
| 0.3713 | 2.0 | 2844 | 0.3954 | 0.8183 | 0.7541 | 0.7862 |
| 0.2943 | 3.0 | 4266 | 0.3937 | 0.8260 | 0.7738 | 0.7999 |
| 0.2317 | 4.0 | 5688 | 0.4349 | 0.8365 | 0.7744 | 0.8055 |
| 0.1827 | 5.0 | 7110 | 0.4562 | 0.8395 | 0.7758 | 0.8077 |
| 0.1456 | 6.0 | 8532 | 0.5414 | 0.8400 | 0.7782 | 0.8091 |
| 0.1186 | 7.0 | 9954 | 0.6398 | 0.8423 | 0.7852 | 0.8137 |
| 0.0962 | 8.0 | 11376 | 0.5349 | 0.8401 | 0.7878 | 0.8139 |
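
Note: the evaluation results reported at the top of this card (loss 0.3937, accuracy 0.8260, F1 0.7738) match the epoch 3 row, the checkpoint with the lowest validation loss. Training stopped after epoch 8 despite `num_epochs: 50`, which suggests early stopping on validation loss, though the card does not state this.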
### Framework versions
- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1