---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-large-uncased-whole-word-masking
tags:
  - generated_from_trainer
model-index:
  - name: 35ea3f9d06d0517cdcbbd091b3aeadbe
    results: []
---

# 35ea3f9d06d0517cdcbbd091b3aeadbe

This model is a fine-tuned version of [google-bert/bert-large-uncased-whole-word-masking](https://huggingface.co/google-bert/bert-large-uncased-whole-word-masking) on the STS-B (`stsb`) subset of the nyu-mll/glue dataset. It achieves the following results on the evaluation set:

- Loss: 0.5116
- Data Size: 1.0
- Epoch Runtime: 20.7935
- MSE: 0.5116
- MAE: 0.5382
- R2: 0.7711
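Since STS-B is a single-output regression task, the checkpoint can be loaded with a one-label sequence-classification head. Below is a minimal inference sketch; the checkpoint path is a placeholder for wherever this model is stored, and the example sentences are illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder id; substitute the actual hub id or local path of this checkpoint.
MODEL_ID = "path/to/this-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# STS-B regression: the classification head has a single output (num_labels=1).
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# Score the semantic similarity of a sentence pair (STS-B labels range 0-5).
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity score: {score:.2f}")
```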

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
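The summary above does name the training data: the STS-B (`stsb`) subset of nyu-mll/glue, which pairs sentences with human similarity scores from 0 to 5. A minimal sketch of loading it with the `datasets` library:

```python
from datasets import load_dataset

# STS-B: sentence pairs labeled with a 0-5 semantic similarity score.
stsb = load_dataset("nyu-mll/glue", "stsb")
print(stsb)              # DatasetDict with train/validation/test splits
print(stsb["train"][0])  # fields: sentence1, sentence2, label, idx
```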

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
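As a rough guide to reproduction, here is a minimal sketch of how these values map onto the Hugging Face `TrainingArguments`; the per-device batch size of 8 across 4 GPUs gives the total batch size of 32 listed above. Model and dataset wiring, and whatever produced the Data Size schedule in the results below, are omitted.

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-5,
    per_device_train_batch_size=8,  # x4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,   # x4 GPUs -> total eval batch size 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```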

### Training results

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | MSE    | MAE    | R2      |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------------:|:------:|:------:|:-------:|
| No log        | 0     | 0    | 5.3002          | 0         | 1.6842        | 5.3013 | 1.8977 | -1.3715 |
| No log        | 1     | 179  | 2.7334          | 0.0078    | 2.2262        | 2.7344 | 1.4062 | -0.2232 |
| No log        | 2     | 358  | 2.4529          | 0.0156    | 2.4359        | 2.4537 | 1.2919 | -0.0976 |
| No log        | 3     | 537  | 2.1676          | 0.0312    | 3.1648        | 2.1684 | 1.2449 | 0.0300  |
| No log        | 4     | 716  | 1.6133          | 0.0625    | 4.4236        | 1.6139 | 1.0401 | 0.2781  |
| No log        | 5     | 895  | 1.1087          | 0.125     | 5.9818        | 1.1091 | 0.8395 | 0.5039  |
| 0.1153        | 6     | 1074 | 1.1612          | 0.25      | 9.4758        | 1.1617 | 0.9112 | 0.4803  |
| 0.9415        | 7     | 1253 | 0.8926          | 0.5       | 12.2021       | 0.8931 | 0.7784 | 0.6005  |
| 1.518         | 8     | 1432 | 1.2209          | 1.0       | 21.0060       | 1.2215 | 0.9047 | 0.4536  |
| 0.5417        | 9     | 1611 | 0.6361          | 1.0       | 20.9045       | 0.6363 | 0.5943 | 0.7154  |
| 0.6933        | 10    | 1790 | 1.1218          | 1.0       | 20.5770       | 1.1225 | 0.8493 | 0.4979  |
| 0.43          | 11    | 1969 | 0.5633          | 1.0       | 20.3664       | 0.5637 | 0.5944 | 0.7478  |
| 0.4021        | 12    | 2148 | 0.6050          | 1.0       | 20.4252       | 0.6050 | 0.5840 | 0.7294  |
| 0.3134        | 13    | 2327 | 0.4976          | 1.0       | 20.3349       | 0.4978 | 0.5487 | 0.7773  |
| 0.2535        | 14    | 2506 | 0.5506          | 1.0       | 20.7725       | 0.5507 | 0.5635 | 0.7536  |
| 0.2374        | 15    | 2685 | 0.4726          | 1.0       | 20.3050       | 0.4727 | 0.5148 | 0.7885  |
| 0.2145        | 16    | 2864 | 0.5204          | 1.0       | 20.3881       | 0.5205 | 0.5441 | 0.7672  |
| 0.1836        | 17    | 3043 | 0.5648          | 1.0       | 20.5899       | 0.5650 | 0.5594 | 0.7472  |
| 0.1813        | 18    | 3222 | 0.4885          | 1.0       | 20.2269       | 0.4887 | 0.5275 | 0.7814  |
| 0.2288        | 19    | 3401 | 0.5116          | 1.0       | 20.7935       | 0.5116 | 0.5382 | 0.7711  |
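Note that although num_epochs was set to 50, the log ends at epoch 19, whose row matches the final results reported above, so training appears to have stopped early. The MSE/MAE/R2 columns are ordinary regression metrics over the validation labels; below is a minimal `compute_metrics` sketch that reproduces them, assuming scikit-learn is available (the exact metric code used for this run is not shown in the card).

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    """Regression metrics matching the columns above (MSE, MAE, R2)."""
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)  # (n, 1) logits -> (n,)
    return {
        "mse": mean_squared_error(labels, predictions),
        "mae": mean_absolute_error(labels, predictions),
        "r2": r2_score(labels, predictions),
    }
```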

### Framework versions

- Transformers 4.57.0
- PyTorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1