This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set (final epoch; see the training results table below):
- Loss: 0.2881
- Accuracy: 0.867
- AUC: 0.952
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | AUC   |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.5028        | 1.0   | 263  | 0.3810          | 0.818    | 0.913 |
| 0.4105        | 2.0   | 526  | 0.3386          | 0.838    | 0.931 |
| 0.3571        | 3.0   | 789  | 0.3130          | 0.853    | 0.940 |
| 0.3556        | 4.0   | 1052 | 0.3417          | 0.853    | 0.946 |
| 0.3539        | 5.0   | 1315 | 0.3438          | 0.860    | 0.948 |
| 0.3473        | 6.0   | 1578 | 0.2908          | 0.869    | 0.950 |
| 0.3341        | 7.0   | 1841 | 0.2865          | 0.878    | 0.950 |
| 0.3106        | 8.0   | 2104 | 0.2884          | 0.867    | 0.950 |
| 0.3131        | 9.0   | 2367 | 0.2833          | 0.873    | 0.952 |
| 0.3143        | 10.0  | 2630 | 0.2881          | 0.867    | 0.952 |
## Base model

[google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)