This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set (final epoch; see the training results table below):
- Loss: 0.2532
- Accuracy: 0.9448
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
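Although the task details above are unrecorded, the accuracy metric implies a sequence-classification head. A minimal inference sketch, assuming the standard transformers classification API; the model ID below is a hypothetical placeholder, not the actual checkpoint name:

```python
# Minimal inference sketch. "your-username/distilbert-finetuned" is a
# hypothetical placeholder; substitute the actual checkpoint ID.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/distilbert-finetuned"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name, if one is configured.
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```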
## Training procedure

### Training hyperparameters

More information needed

### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 1.0 | 318 | 3.1980 | 0.7271 |
| 3.7268 | 2.0 | 636 | 1.5994 | 0.8587 |
| 3.7268 | 3.0 | 954 | 0.8034 | 0.9142 |
| 1.3745 | 4.0 | 1272 | 0.4826 | 0.9290 |
| 0.4514 | 5.0 | 1590 | 0.3451 | 0.9387 |
| 0.4514 | 6.0 | 1908 | 0.2988 | 0.9406 |
| 0.1826 | 7.0 | 2226 | 0.2627 | 0.9452 |
| 0.0937 | 8.0 | 2544 | 0.2589 | 0.9448 |
| 0.0937 | 9.0 | 2862 | 0.2531 | 0.9439 |
| 0.0618 | 10.0 | 3180 | 0.2532 | 0.9448 |
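For reference, here is a sketch of the kind of transformers Trainer setup that produces a per-epoch results table like the one above. Because the card's hyperparameter list is missing, every concrete value below (learning rate, batch size, label count, datasets, output path) is an assumption for illustration, not the author's actual configuration:

```python
# Hypothetical reconstruction of the training setup; none of these
# hyperparameter values come from the card, which does not record them.
import numpy as np
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# Tiny stand-in dataset; the actual training data is unknown.
texts = ["a positive example", "a negative example"]
labels = [1, 0]
encodings = tokenizer(texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenizer output so Trainer can index it."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

dataset = ToyDataset(encodings, labels)

def compute_metrics(eval_pred):
    # Accuracy metric matching the card's "Accuracy" column.
    logits, label_ids = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == label_ids).mean())}

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased",
    num_labels=2,  # assumption: the card does not state the label count
)

args = TrainingArguments(
    output_dir="distilbert-finetuned",  # hypothetical output path
    num_train_epochs=10,                # matches the 10 epochs in the table
    eval_strategy="epoch",              # one evaluation per epoch, as in the
                                        # table (evaluation_strategy in older
                                        # transformers releases)
    logging_steps=500,                  # default; epoch 1 ends at step 318,
                                        # before the first log, hence "No log"
    learning_rate=2e-5,                 # assumption: not stated in the card
    per_device_train_batch_size=16,     # assumption: not stated in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # a real run would use a held-out eval split
    compute_metrics=compute_metrics,
)
trainer.train()
```

The "No log" entries in the Training Loss column are consistent with the default logging interval of 500 steps: the first evaluation at step 318 happens before any training loss has been logged.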
Base model: distilbert/distilbert-base-uncased