# text-distilbert-predictor
This model is a fine-tuned version of distilbert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0260
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
This model fine-tunes distilbert-base-uncased for text classification: a DistilBERT encoder with a linear classification head.
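A minimal sketch of loading the checkpoint and inspecting the head, assuming the model is publicly available on the Hub under `sm-riti16/text-distilbert-predictor`:

```python
from transformers import AutoModelForSequenceClassification

# Load the fine-tuned checkpoint; the repo id is taken from this model card.
model = AutoModelForSequenceClassification.from_pretrained(
    "sm-riti16/text-distilbert-predictor"
)

# DistilBERT encoder followed by a classification head
# (pre_classifier + linear output layer in Transformers).
print(model.pre_classifier)  # Linear(768 -> 768)
print(model.classifier)      # Linear(768 -> num_labels)
```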
Intended uses & limitations
Direct use: quick baseline for short English text classification (binary or multiclass, per your label map). Out-of-scope: high-stakes decisions, long-context documents without chunking, non-English domains without re-training. Limitations: Augmented data is very similar original data leading to biases and lack of diversity as originally expected.
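A minimal inference sketch with the `pipeline` API, assuming the checkpoint is available on the Hub; the input sentence is only illustrative:

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="sm-riti16/text-distilbert-predictor",
)

# Replace with your own short English text.
print(classifier("This is a sample sentence to classify."))
```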
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
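A hedged reconstruction of how these hyperparameters might map onto `TrainingArguments` and `Trainer`; the model, tokenized datasets, and `compute_metrics` function are assumed to be defined elsewhere, and `output_dir` and `eval_strategy` are illustrative rather than taken from the original run:

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="text-distilbert-predictor",  # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="epoch",  # assumed: the results table reports per-epoch metrics
)

trainer = Trainer(
    model=model,                      # DistilBERT with a classification head
    args=training_args,
    train_dataset=train_dataset,      # assumed: tokenized training split
    eval_dataset=eval_dataset,        # assumed: tokenized evaluation split
    compute_metrics=compute_metrics,  # assumed: returns accuracy/F1/precision/recall
)
trainer.train()
```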
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 0.0487 | 1.0 | 40 | 0.0263 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0084 | 2.0 | 80 | 0.0076 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0081 | 3.0 | 120 | 0.0060 | 1.0 | 1.0 | 1.0 | 1.0 |
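The metrics in the table can be produced by a `compute_metrics` function along these lines; the weighted averaging mode is an assumption, not stated in the card:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```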
### Framework versions
- Transformers 4.56.1
- PyTorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0