---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-wellness-classifier
  results: []
---
# bert-wellness-classifier

This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0555
- Accuracy: 0.624
- Auc: 0.867
- Precision Class 0: 0.4
- Precision Class 1: 0.762
- Precision Class 2: 0.429
- Precision Class 3: 0.72
- Precision Class 4: 0.7
- Precision Class 5: 0.5
- Recall Class 0: 0.421
- Recall Class 1: 0.696
- Recall Class 2: 0.444
- Recall Class 3: 0.766
- Recall Class 4: 0.766
- Recall Class 5: 0.364
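
For reference, a minimal inference sketch using the `transformers` pipeline API is shown below. The repo id is assumed to match the model name above, and the class labels (`LABEL_0`–`LABEL_5`) are placeholders, since the card does not document what the six classes represent.

```python
from transformers import pipeline

# The model path is an assumption; replace with the actual Hub repo id or a local checkpoint.
classifier = pipeline(
    "text-classification",
    model="bert-wellness-classifier",
    top_k=None,  # return a score for each of the 6 classes
)

print(classifier("I have been sleeping poorly and feel anxious most days."))
# Returns a list of {label, score} dicts; labels default to LABEL_0 ... LABEL_5
# unless id2label was set when the model was saved.
```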
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
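
As a rough guide, these values map onto `TrainingArguments` as in the sketch below. The dataset, preprocessing, and `Trainer` wiring are not described in the card, so everything beyond the listed hyperparameters is an assumption.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

# Base model from the card; 6 labels inferred from the per-class metrics reported above.
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=6
)
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Hyperparameters copied from the list above; the Adam betas and epsilon are the Trainer defaults.
args = TrainingArguments(
    output_dir="bert-wellness-classifier",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",  # validation metrics appear once per epoch in the table below
)

# A Trainer would then be built from these args plus the (undocumented) train/eval datasets.
```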
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | Precision Class 0 | Precision Class 1 | Precision Class 2 | Precision Class 3 | Precision Class 4 | Precision Class 5 | Recall Class 0 | Recall Class 1 | Recall Class 2 | Recall Class 3 | Recall Class 4 | Recall Class 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.6248 | 1.0 | 62 | 1.4733 | 0.439 | 0.777 | 0.371 | 0.0 | 0.2 | 0.706 | 0.399 | 0.0 | 0.52 | 0.0 | 0.045 | 0.571 | 0.821 | 0.0 |
| 1.4241 | 2.0 | 124 | 1.3340 | 0.524 | 0.821 | 0.464 | 0.625 | 0.5 | 0.627 | 0.514 | 0.25 | 0.52 | 0.25 | 0.182 | 0.762 | 0.806 | 0.083 |
| 1.3082 | 3.0 | 186 | 1.2389 | 0.547 | 0.849 | 0.448 | 0.714 | 0.345 | 0.816 | 0.531 | 0.455 | 0.52 | 0.25 | 0.455 | 0.738 | 0.776 | 0.139 |
| 1.2177 | 4.0 | 248 | 1.1702 | 0.608 | 0.862 | 0.478 | 0.722 | 0.35 | 0.625 | 0.694 | 0.533 | 0.44 | 0.65 | 0.318 | 0.952 | 0.746 | 0.222 |
| 1.1415 | 5.0 | 310 | 1.1146 | 0.594 | 0.869 | 0.48 | 0.733 | 0.417 | 0.698 | 0.607 | 0.389 | 0.48 | 0.55 | 0.227 | 0.881 | 0.806 | 0.194 |
| 1.1024 | 6.0 | 372 | 1.0959 | 0.59 | 0.87 | 0.462 | 0.833 | 0.368 | 0.75 | 0.672 | 0.375 | 0.48 | 0.5 | 0.318 | 0.857 | 0.672 | 0.417 |
| 1.0609 | 7.0 | 434 | 1.0660 | 0.623 | 0.874 | 0.5 | 0.846 | 0.381 | 0.783 | 0.667 | 0.438 | 0.44 | 0.55 | 0.364 | 0.857 | 0.776 | 0.389 |
| 1.0444 | 8.0 | 496 | 1.0565 | 0.623 | 0.875 | 0.5 | 0.857 | 0.364 | 0.755 | 0.676 | 0.448 | 0.48 | 0.6 | 0.364 | 0.881 | 0.746 | 0.361 |
| 1.0295 | 9.0 | 558 | 1.0497 | 0.623 | 0.875 | 0.5 | 0.857 | 0.348 | 0.783 | 0.68 | 0.433 | 0.48 | 0.6 | 0.364 | 0.857 | 0.761 | 0.361 |
| 1.0067 | 10.0 | 620 | 1.0471 | 0.623 | 0.876 | 0.5 | 0.857 | 0.348 | 0.755 | 0.676 | 0.464 | 0.48 | 0.6 | 0.364 | 0.881 | 0.746 | 0.361 |
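
The per-class precision/recall and AUC columns suggest a custom `compute_metrics` callback was passed to the `Trainer`. The card does not include it, so the sketch below is only an assumption of how such metrics could be computed with scikit-learn; in particular, the AUC averaging scheme actually used is not documented.

```python
import numpy as np
from scipy.special import softmax
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = softmax(logits, axis=-1)
    preds = np.argmax(logits, axis=-1)

    metrics = {
        "accuracy": accuracy_score(labels, preds),
        # Assumed one-vs-rest macro AUC over the 6 classes.
        "auc": roc_auc_score(labels, probs, multi_class="ovr", average="macro"),
    }
    precision = precision_score(labels, preds, average=None, zero_division=0)
    recall = recall_score(labels, preds, average=None, zero_division=0)
    for i, (p, r) in enumerate(zip(precision, recall)):
        metrics[f"precision_class_{i}"] = p
        metrics[f"recall_class_{i}"] = r
    return metrics
```

Such a function would be passed as `Trainer(..., compute_metrics=compute_metrics)` so that the per-epoch values above are logged during evaluation.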
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0