bert-seq-class-values-no-context

This model is a fine-tuned version of google-bert/bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3514
  • Subset Accuracy: 0.2902
  • F1 Macro: 0.3370
  • F1 Micro: 0.3898
  • Precision Macro: 0.3762
  • Recall Macro: 0.3140
  • Roc Auc: 0.7933
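
The subset-accuracy, macro/micro F1, and ROC AUC metrics suggest a multi-label sequence classification head. A minimal inference sketch, assuming the checkpoint is the published DayCardoso/bert-seq-class-values-no-context repository and using an illustrative 0.5 decision threshold (the threshold is not specified by the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "DayCardoso/bert-seq-class-values-no-context"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example sentence to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: score each label independently with a sigmoid, then threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```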

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 2025
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
  • mixed_precision_training: Native AMP
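
The same configuration expressed as a hedged transformers.TrainingArguments sketch; output_dir is a placeholder, and everything else mirrors the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-seq-class-values-no-context",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=2025,
    gradient_accumulation_steps=4,  # effective train batch size: 4 * 4 = 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed-precision training
)
```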

Training results

| Training Loss | Epoch | Step | Validation Loss | Subset Accuracy | F1 Macro | F1 Micro | Precision Macro | Recall Macro | Roc Auc |
|---|---|---|---|---|---|---|---|---|---|
| 0.4274 | 0.5002 | 767 | 0.2090 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6092 |
| 0.1875 | 1.0 | 1534 | 0.1773 | 0.0816 | 0.0680 | 0.1479 | 0.2599 | 0.0476 | 0.7795 |
| 0.1682 | 1.5002 | 2301 | 0.1681 | 0.1630 | 0.1275 | 0.2611 | 0.2788 | 0.1014 | 0.8039 |
| 0.161 | 2.0 | 3068 | 0.1631 | 0.2076 | 0.1940 | 0.3133 | 0.4626 | 0.1538 | 0.8256 |
| 0.1379 | 2.5002 | 3835 | 0.1674 | 0.2572 | 0.2415 | 0.3613 | 0.4434 | 0.1922 | 0.8235 |
| 0.1323 | 3.0 | 4602 | 0.1634 | 0.2604 | 0.2566 | 0.3641 | 0.4828 | 0.1999 | 0.8349 |
| 0.1032 | 3.5002 | 5369 | 0.1855 | 0.2953 | 0.2878 | 0.3803 | 0.3958 | 0.2422 | 0.8211 |
| 0.0961 | 4.0 | 6136 | 0.1858 | 0.3151 | 0.3092 | 0.4045 | 0.4284 | 0.2670 | 0.8231 |
| 0.0737 | 4.5002 | 6903 | 0.2082 | 0.3121 | 0.3140 | 0.3941 | 0.3975 | 0.2748 | 0.8120 |
| 0.0651 | 5.0 | 7670 | 0.2108 | 0.3082 | 0.2990 | 0.3935 | 0.4146 | 0.2605 | 0.8106 |
| 0.0541 | 5.5002 | 8437 | 0.2241 | 0.2995 | 0.3174 | 0.3851 | 0.3851 | 0.2861 | 0.8055 |
| 0.0465 | 6.0 | 9204 | 0.2386 | 0.3039 | 0.3123 | 0.3871 | 0.3757 | 0.2779 | 0.8026 |
| 0.0399 | 6.5002 | 9971 | 0.2458 | 0.3020 | 0.3240 | 0.3894 | 0.3745 | 0.2979 | 0.8032 |
| 0.0345 | 7.0 | 10738 | 0.2539 | 0.3078 | 0.3288 | 0.4012 | 0.3615 | 0.3105 | 0.8039 |
| 0.0251 | 7.5002 | 11505 | 0.2663 | 0.2951 | 0.3301 | 0.3912 | 0.3619 | 0.3140 | 0.7993 |
| 0.0254 | 8.0 | 12272 | 0.2737 | 0.2944 | 0.3322 | 0.3920 | 0.3709 | 0.3109 | 0.7998 |
| 0.0189 | 8.5002 | 13039 | 0.2791 | 0.2844 | 0.3388 | 0.3984 | 0.3574 | 0.3310 | 0.8029 |
| 0.0195 | 9.0 | 13806 | 0.2838 | 0.2913 | 0.3273 | 0.3896 | 0.3615 | 0.3064 | 0.7989 |
| 0.014 | 9.5002 | 14573 | 0.3037 | 0.2925 | 0.3336 | 0.3987 | 0.3680 | 0.3201 | 0.7971 |
| 0.0139 | 10.0 | 15340 | 0.3015 | 0.2903 | 0.3401 | 0.3979 | 0.3648 | 0.3239 | 0.7950 |
| 0.0101 | 10.5002 | 16107 | 0.3192 | 0.2846 | 0.3428 | 0.4032 | 0.3598 | 0.3409 | 0.7934 |
| 0.0103 | 11.0 | 16874 | 0.3257 | 0.2866 | 0.3376 | 0.3989 | 0.3566 | 0.3274 | 0.7928 |
| 0.0073 | 11.5002 | 17641 | 0.3275 | 0.3004 | 0.3334 | 0.4008 | 0.3828 | 0.3077 | 0.7941 |
| 0.0074 | 12.0 | 18408 | 0.3378 | 0.2868 | 0.3361 | 0.3999 | 0.3646 | 0.3217 | 0.7911 |
| 0.0056 | 12.5002 | 19175 | 0.3424 | 0.3010 | 0.3419 | 0.4036 | 0.3733 | 0.3215 | 0.7926 |
| 0.0052 | 13.0 | 19942 | 0.3514 | 0.2902 | 0.3370 | 0.3898 | 0.3762 | 0.3140 | 0.7933 |
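
The metric columns above are what a multi-label compute_metrics hook would report. A minimal sketch, assuming sigmoid outputs thresholded at 0.5; the actual function used for this run is not documented:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, precision_score, recall_score, roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))   # sigmoid over per-label logits
    preds = (probs > 0.5).astype(int)   # illustrative 0.5 threshold
    return {
        # On a multi-label indicator matrix, accuracy_score is the
        # exact-match (subset) accuracy shown in the table.
        "subset_accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro", zero_division=0),
        "f1_micro": f1_score(labels, preds, average="micro", zero_division=0),
        "precision_macro": precision_score(labels, preds, average="macro", zero_division=0),
        "recall_macro": recall_score(labels, preds, average="macro", zero_division=0),
        "roc_auc": roc_auc_score(labels, probs, average="macro"),
    }
```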

Framework versions

  • Transformers 4.53.2
  • PyTorch 2.6.0+cu124
  • Datasets 2.14.4
  • Tokenizers 0.21.2