This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.8016
- Accuracy: 0.9177

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

More information needed

### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 1.0 | 318 | 3.3032 | 0.7171 |
| 3.8072 | 2.0 | 636 | 1.9041 | 0.8445 |
| 3.8072 | 3.0 | 954 | 1.1863 | 0.8890 |
| 1.7326 | 4.0 | 1272 | 0.8879 | 0.9123 |
| 0.9355 | 5.0 | 1590 | 0.8016 | 0.9177 |
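As a quick summary of the table above, validation loss falls by roughly three quarters over the five epochs while accuracy climbs about 20 points. A minimal sketch (metric values copied from the table; the `relative_improvement` helper is illustrative, not part of the training code):

```python
# Per-epoch validation metrics, copied from the training results table.
val_loss = [3.3032, 1.9041, 1.1863, 0.8879, 0.8016]
accuracy = [0.7171, 0.8445, 0.8890, 0.9123, 0.9177]

def relative_improvement(series):
    """Fraction by which the final value improved over the first."""
    return (series[0] - series[-1]) / series[0]

loss_drop = relative_improvement(val_loss)   # ~0.757, i.e. ~75.7% reduction
acc_gain = accuracy[-1] - accuracy[0]        # ~0.2006 absolute accuracy gain
```

Most of the improvement comes in the first three epochs; the last epoch adds only about half a point of accuracy, suggesting training is close to converged by epoch 5.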