# distilbert_km_20_v2_stsb
This model is a fine-tuned version of Hartunka/distilbert_km_20_v2 on the GLUE STSB dataset. It achieves the following results on the evaluation set:
- Loss: 2.2147
- Pearson: 0.2748
- Spearmanr: 0.2596
- Combined Score: 0.2672
## Model description
More information needed
## Intended uses & limitations
More information needed
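
The checkpoint is a single-output regression head on top of Hartunka/distilbert_km_20_v2 that scores the similarity of a sentence pair (STS-B labels range from 0 to 5). Below is a minimal inference sketch, assuming the checkpoint loads with the standard Transformers auto classes; the example sentences are illustrative and not taken from the STS-B data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hartunka/distilbert_km_20_v2_stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # num_labels=1 -> regression
model.eval()

# STS-B inputs are sentence pairs; the single logit is the predicted similarity score.
inputs = tokenizer(
    "A man is playing a guitar.",   # illustrative example sentence
    "A person plays a guitar.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {score:.3f}")
```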
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Trainer` sketch reproducing them follows the list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
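
A minimal sketch of how these hyperparameters map onto a Hugging Face `Trainer` run. The GLUE STS-B preprocessing shown here is the standard recipe and an assumption, not taken from the original training script; `output_dir` is likewise a placeholder.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, TrainingArguments, Trainer)

base = "Hartunka/distilbert_km_20_v2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)  # regression head for STS-B

# GLUE STS-B: sentence pairs with float similarity labels in [0, 5]
raw = load_dataset("glue", "stsb")
encoded = raw.map(
    lambda batch: tokenizer(batch["sentence1"], batch["sentence2"], truncation=True),
    batched=True,
)

args = TrainingArguments(
    output_dir="distilbert_km_20_v2_stsb",   # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",                     # AdamW defaults: betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",                   # the results table reports one evaluation per epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),  # dynamic padding per batch
)
trainer.train()
```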
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---|---|---|---|---|---|---|
| 3.0221 | 1.0 | 23 | 2.3822 | 0.1214 | 0.1157 | 0.1185 |
| 1.9542 | 2.0 | 46 | 2.4669 | 0.1764 | 0.1569 | 0.1666 |
| 1.6966 | 3.0 | 69 | 2.2147 | 0.2748 | 0.2596 | 0.2672 |
| 1.4275 | 4.0 | 92 | 2.3877 | 0.2652 | 0.2532 | 0.2592 |
| 1.0925 | 5.0 | 115 | 2.5099 | 0.3043 | 0.2961 | 0.3002 |
| 0.8164 | 6.0 | 138 | 2.4808 | 0.3246 | 0.3186 | 0.3216 |
| 0.6226 | 7.0 | 161 | 2.4486 | 0.3349 | 0.3324 | 0.3336 |
| 0.5022 | 8.0 | 184 | 2.5275 | 0.3115 | 0.3031 | 0.3073 |
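
The combined score is consistent with the arithmetic mean of the Pearson and Spearman correlations (e.g. for the best epoch, (0.2748 + 0.2596) / 2 = 0.2672). A sketch of that metric computation with scipy, under this assumption:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def stsb_metrics(predictions: np.ndarray, labels: np.ndarray) -> dict:
    """Pearson/Spearman correlations and their mean, matching the columns above."""
    pearson = pearsonr(predictions, labels)[0]
    spearman = spearmanr(predictions, labels)[0]
    return {
        "pearson": pearson,
        "spearmanr": spearman,
        "combined_score": (pearson + spearman) / 2,
    }
```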
### Framework versions
- Transformers 4.50.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1