# whatsapp-group-classifierv3
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.4325
- Accuracy: 0.8454
- Precision: 0.8680
- Recall: 0.8522
- F1: 0.8589
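For quick use, the model can be loaded with the `transformers` pipeline API, as in the minimal sketch below. The example input is made up, and the label names returned depend on the `id2label` mapping saved with the model, which this card does not document.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="DTempo/whatsapp-group-classifierv3",
)

# Example input; the labels in the output depend on the model's
# id2label config and are not documented in this card.
print(classifier("Hey everyone, the meeting moved to 6 pm tomorrow!"))
```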
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a matching `Trainer` sketch follows the list):
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
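As a rough guide, the hyperparameters above map onto a `Trainer` setup along the following lines. This is a sketch, not the author's actual training script: the number of classes (`NUM_LABELS`), the dataset, and the preprocessing are placeholders, since the card leaves them undocumented.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

NUM_LABELS = 2  # assumption: the card does not state the number of classes

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_LABELS
)

# Placeholder data purely to make the sketch runnable; the real training
# and evaluation sets are not documented in this card.
raw = Dataset.from_dict({
    "text": ["example chat message one", "example chat message two"],
    "label": [0, 1],
})
ds = raw.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="whatsapp-group-classifierv3",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # the results table logs metrics once per epoch
)

# Trainer defaults to AdamW with betas=(0.9, 0.999) and epsilon=1e-08,
# matching the optimizer settings listed above.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    eval_dataset=ds,  # placeholder reuse; the original run used a held-out set
    tokenizer=tokenizer,
)
trainer.train()
```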
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| 0.9598 | 1.0 | 513 | 0.6553 | 0.7366 | 0.7724 | 0.7280 | 0.7280 |
| 0.6517 | 2.0 | 1026 | 0.5718 | 0.7751 | 0.7888 | 0.7802 | 0.7817 |
| 0.5707 | 3.0 | 1539 | 0.5134 | 0.8024 | 0.8214 | 0.8059 | 0.8115 |
| 0.5246 | 4.0 | 2052 | 0.4874 | 0.8146 | 0.8376 | 0.8151 | 0.8243 |
| 0.4773 | 5.0 | 2565 | 0.4717 | 0.8215 | 0.8417 | 0.8295 | 0.8344 |
| 0.4512 | 6.0 | 3078 | 0.4586 | 0.8244 | 0.8465 | 0.8330 | 0.8389 |
| 0.4496 | 7.0 | 3591 | 0.4534 | 0.8332 | 0.8538 | 0.8380 | 0.8450 |
| 0.4164 | 8.0 | 4104 | 0.4432 | 0.8366 | 0.8615 | 0.8412 | 0.8501 |
| 0.4184 | 9.0 | 4617 | 0.4396 | 0.8356 | 0.8601 | 0.8407 | 0.8493 |
| 0.4075 | 10.0 | 5130 | 0.4346 | 0.8332 | 0.8563 | 0.8418 | 0.8480 |
| 0.3923 | 11.0 | 5643 | 0.4329 | 0.8395 | 0.8614 | 0.8453 | 0.8519 |
| 0.3886 | 12.0 | 6156 | 0.4367 | 0.8390 | 0.8623 | 0.8450 | 0.8525 |
| 0.3792 | 13.0 | 6669 | 0.4248 | 0.8390 | 0.8621 | 0.8431 | 0.8512 |
| 0.3659 | 14.0 | 7182 | 0.4252 | 0.8400 | 0.8624 | 0.8483 | 0.8539 |
| 0.3738 | 15.0 | 7695 | 0.4236 | 0.8376 | 0.8606 | 0.8445 | 0.8515 |
| 0.3502 | 16.0 | 8208 | 0.4308 | 0.8444 | 0.8651 | 0.8521 | 0.8575 |
| 0.3574 | 17.0 | 8721 | 0.4292 | 0.8439 | 0.8658 | 0.8504 | 0.8572 |
| 0.3521 | 18.0 | 9234 | 0.4266 | 0.8449 | 0.8667 | 0.8512 | 0.8581 |
| 0.3306 | 19.0 | 9747 | 0.4247 | 0.8415 | 0.8666 | 0.8450 | 0.8543 |
| 0.3581 | 20.0 | 10260 | 0.4316 | 0.8449 | 0.8692 | 0.8531 | 0.8596 |
| 0.3223 | 21.0 | 10773 | 0.4342 | 0.8483 | 0.8713 | 0.8565 | 0.8624 |
| 0.3235 | 22.0 | 11286 | 0.4270 | 0.8473 | 0.8712 | 0.8551 | 0.8616 |
| 0.3281 | 23.0 | 11799 | 0.4263 | 0.8449 | 0.8686 | 0.8496 | 0.8579 |
| 0.3130 | 24.0 | 12312 | 0.4319 | 0.8463 | 0.8677 | 0.8538 | 0.8598 |
| 0.3200 | 25.0 | 12825 | 0.4305 | 0.8468 | 0.8681 | 0.8539 | 0.8600 |
| 0.3104 | 26.0 | 13338 | 0.4280 | 0.8488 | 0.8708 | 0.8543 | 0.8616 |
| 0.3203 | 27.0 | 13851 | 0.4309 | 0.8473 | 0.8698 | 0.8547 | 0.8611 |
| 0.304 | 28.0 | 14364 | 0.4298 | 0.8444 | 0.8671 | 0.8494 | 0.8572 |
| 0.3057 | 29.0 | 14877 | 0.4358 | 0.8498 | 0.8720 | 0.8575 | 0.8634 |
| 0.3238 | 30.0 | 15390 | 0.4325 | 0.8454 | 0.8680 | 0.8522 | 0.8589 |
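A `compute_metrics` function along these lines would reproduce the four metric columns above when passed to `Trainer` via `compute_metrics=compute_metrics`. The card does not say whether precision/recall/F1 were aggregated with binary, macro, or weighted averaging, so `average="weighted"` here is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "weighted" averaging is an assumption; the card does not specify it.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```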
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0