all-MiniLM-L6-v2-twitter-sentiment

This model is a fine-tuned version of sentence-transformers/all-MiniLM-L6-v2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7018
  • Accuracy: 0.6891
  • F1: 0.6879
  • Precision: 0.6940
  • Recall: 0.6891

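In every row of the evaluation log below, recall is identical to accuracy, which is characteristic of weighted-average metrics. As an illustrative sketch only (not taken from this repository), a `compute_metrics` callback along the following lines would produce metrics of this shape; the `average="weighted"` choice is an assumption inferred from that pattern:

```python
# Hypothetical compute_metrics sketch; average="weighted" is an assumption
# inferred from recall matching accuracy in every logged evaluation.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```
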
Model description

More information needed

Intended uses & limitations

More information needed
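
While usage details are pending, here is a minimal inference sketch using the `transformers` pipeline API. Only the model id comes from this card; the example input is illustrative, and the label names depend on the `id2label` mapping saved with the checkpoint:

```python
# Minimal inference sketch. The model id is from this card; the example
# input is an illustrative assumption.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Muhammad7777/all-MiniLM-L6-v2-twitter-sentiment",
)

print(classifier("I love this new phone!"))
# -> [{"label": ..., "score": ...}], labels per the checkpoint's id2label
```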

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 3
  • mixed_precision_training: Native AMP

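For reference, a sketch of `TrainingArguments` mirroring the values above, assuming the standard Hugging Face `Trainer` was used; the `output_dir` is a placeholder, and the 200-step evaluation cadence is read off the log below:

```python
# TrainingArguments sketch mirroring the hyperparameters listed above.
# Assumes the standard Hugging Face Trainer; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="all-MiniLM-L6-v2-twitter-sentiment",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,                 # Native AMP mixed precision
    eval_strategy="steps",     # evaluation every 200 steps, per the log below
    eval_steps=200,
)
```
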
Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0307        | 0.0702 | 200  | 1.0141          | 0.435    | 0.2643 | 0.5984    | 0.435  |
| 0.8189        | 0.1403 | 400  | 0.7692          | 0.6685   | 0.6557 | 0.6946    | 0.6685 |
| 0.746         | 0.2105 | 600  | 0.7140          | 0.6845   | 0.6814 | 0.6812    | 0.6845 |
| 0.7113        | 0.2806 | 800  | 0.6845          | 0.701    | 0.7003 | 0.7108    | 0.701  |
| 0.6897        | 0.3508 | 1000 | 0.6715          | 0.703    | 0.7023 | 0.7058    | 0.703  |
| 0.6705        | 0.4209 | 1200 | 0.6571          | 0.7      | 0.7017 | 0.7046    | 0.7    |
| 0.6948        | 0.4911 | 1400 | 0.6541          | 0.7      | 0.7024 | 0.7090    | 0.7    |
| 0.6926        | 0.5612 | 1600 | 0.6452          | 0.7115   | 0.7121 | 0.7138    | 0.7115 |
| 0.6763        | 0.6314 | 1800 | 0.6394          | 0.719    | 0.7200 | 0.7216    | 0.719  |
| 0.6997        | 0.7015 | 2000 | 0.6698          | 0.6935   | 0.6858 | 0.7050    | 0.6935 |
| 0.6465        | 0.7717 | 2200 | 0.6327          | 0.716    | 0.7167 | 0.7181    | 0.716  |
| 0.6478        | 0.8418 | 2400 | 0.6339          | 0.7265   | 0.7277 | 0.7301    | 0.7265 |
| 0.6534        | 0.9120 | 2600 | 0.6447          | 0.7175   | 0.7191 | 0.7271    | 0.7175 |
| 0.6408        | 0.9821 | 2800 | 0.6184          | 0.725    | 0.7245 | 0.7241    | 0.725  |
| 0.5421        | 1.0523 | 3000 | 0.6390          | 0.7145   | 0.7166 | 0.7216    | 0.7145 |
| 0.5701        | 1.1224 | 3200 | 0.6339          | 0.725    | 0.7242 | 0.7242    | 0.725  |
| 0.5974        | 1.1926 | 3400 | 0.6308          | 0.725    | 0.7238 | 0.7237    | 0.725  |
| 0.5872        | 1.2627 | 3600 | 0.6252          | 0.726    | 0.7272 | 0.7299    | 0.726  |
| 0.5847        | 1.3329 | 3800 | 0.6265          | 0.7315   | 0.7329 | 0.7359    | 0.7315 |
| 0.5906        | 1.4030 | 4000 | 0.6354          | 0.7205   | 0.7198 | 0.7244    | 0.7205 |
| 0.5541        | 1.4732 | 4200 | 0.6206          | 0.7305   | 0.7309 | 0.7314    | 0.7305 |
| 0.5922        | 1.5433 | 4400 | 0.6214          | 0.7305   | 0.7314 | 0.7334    | 0.7305 |
| 0.5816        | 1.6135 | 4600 | 0.6329          | 0.7275   | 0.7290 | 0.7358    | 0.7275 |
| 0.5312        | 1.6836 | 4800 | 0.6438          | 0.719    | 0.7170 | 0.7223    | 0.719  |
| 0.5477        | 1.7538 | 5000 | 0.6297          | 0.727    | 0.7269 | 0.7292    | 0.727  |
| 0.5488        | 1.8239 | 5200 | 0.6314          | 0.7295   | 0.7290 | 0.7292    | 0.7295 |
| 0.5541        | 1.8941 | 5400 | 0.6186          | 0.734    | 0.7340 | 0.7342    | 0.734  |
| 0.587         | 1.9642 | 5600 | 0.6103          | 0.734    | 0.7338 | 0.7340    | 0.734  |
| 0.5011        | 2.0344 | 5800 | 0.6428          | 0.729    | 0.7303 | 0.7362    | 0.729  |
| 0.5241        | 2.1045 | 6000 | 0.6288          | 0.7315   | 0.7322 | 0.7331    | 0.7315 |
| 0.5462        | 2.1747 | 6200 | 0.6369          | 0.728    | 0.7251 | 0.7263    | 0.728  |
| 0.4792        | 2.2448 | 6400 | 0.6365          | 0.729    | 0.7306 | 0.7339    | 0.729  |
| 0.5222        | 2.3150 | 6600 | 0.6320          | 0.73     | 0.7295 | 0.7292    | 0.73   |
| 0.4967        | 2.3851 | 6800 | 0.6326          | 0.729    | 0.7288 | 0.7296    | 0.729  |
| 0.5261        | 2.4553 | 7000 | 0.6316          | 0.726    | 0.7250 | 0.7246    | 0.726  |
| 0.519         | 2.5254 | 7200 | 0.6329          | 0.7285   | 0.7293 | 0.7307    | 0.7285 |
| 0.5168        | 2.5956 | 7400 | 0.6279          | 0.7315   | 0.7324 | 0.7343    | 0.7315 |
| 0.4944        | 2.6657 | 7600 | 0.6308          | 0.7325   | 0.7322 | 0.7321    | 0.7325 |
| 0.5048        | 2.7359 | 7800 | 0.6424          | 0.73     | 0.7306 | 0.7322    | 0.73   |
| 0.5201        | 2.8060 | 8000 | 0.6342          | 0.734    | 0.7343 | 0.7349    | 0.734  |
| 0.5391        | 2.8762 | 8200 | 0.6314          | 0.7305   | 0.7307 | 0.7311    | 0.7305 |
| 0.5361        | 2.9463 | 8400 | 0.6353          | 0.733    | 0.7335 | 0.7342    | 0.733  |

Framework versions

  • Transformers 4.55.4
  • PyTorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.21.4