# distilbert-finetuned-custom-ner
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set (a usage sketch follows the metric list):
- Loss: 0.1651
- Precision: 0.8404
- Recall: 0.9293
- F1: 0.8826
- Accuracy: 0.9681
- Loc Precision: 0.9325
- Loc Recall: 0.9472
- Loc F1: 0.9398
- Misc Precision: 0.5968
- Misc Recall: 0.8460
- Misc F1: 0.6999
- Org Precision: 0.7967
- Org Recall: 0.9031
- Org F1: 0.8466
- Per Precision: 0.9537
- Per Recall: 0.9723
- Per F1: 0.9629
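A minimal inference sketch, not part of the original card: it assumes the checkpoint is published under the repository id named above, and the example sentence and aggregation strategy are illustrative choices.

```python
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy="simple" merges
# word-piece predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="vishnu-vizz/distilbert-finetuned-custom-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face Inc. is based in New York City."))
# Returns a list of dicts with entity_group (PER/ORG/LOC/MISC),
# score, word, and character offsets.
```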
## Model description

This is distilbert-base-uncased with a token-classification head, fine-tuned for named entity recognition. It tags the four CoNLL-2003 entity types: persons (PER), organizations (ORG), locations (LOC), and miscellaneous names (MISC).
## Intended uses & limitations

The model targets English named entity recognition on news-style text like the CoNLL-2003 corpus; accuracy on other domains has not been evaluated. The uncased base model discards capitalization cues, and MISC is by far the weakest class (precision ≈ 0.60, well below the other entity types), so MISC predictions should be treated with caution.
## Training and evaluation data

The model was fine-tuned on the conll2003 train split; the metrics reported above and in the table below were computed on the conll2003 validation split.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training; a sketch mapping them onto the Trainer API follows the list:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
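For orientation, here is how these hyperparameters map onto the Trainer API, assuming the standard conll2003 token-classification recipe. The label alignment is simplified (every sub-token inherits its word's tag), and this is a reconstruction under those assumptions, not the author's actual training script.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("conll2003")
label_names = raw["train"].features["ner_tags"].feature.names  # 9 BIO tags

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize_and_align(batch):
    # Tokenize pre-split words and copy each word's tag to all of its
    # sub-tokens; special tokens get -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = [
        [-100 if w is None else tags[w] for w in enc.word_ids(i)]
        for i, tags in enumerate(batch["ner_tags"])
    ]
    return enc

tokenized = raw.map(tokenize_and_align, batched=True,
                    remove_columns=raw["train"].column_names)

model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(label_names)
)

args = TrainingArguments(
    output_dir="distilbert-finetuned-custom-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",    # AdamW, betas=(0.9, 0.999), eps=1e-8 by default
    eval_strategy="epoch",  # matches the per-epoch validation rows below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```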
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Loc Precision | Loc Recall | Loc F1 | Misc Precision | Misc Recall | Misc F1 | Org Precision | Org Recall | Org F1 | Per Precision | Per Recall | Per F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0697 | 1.0 | 878 | 0.1253 | 0.8373 | 0.9189 | 0.8762 | 0.9688 | 0.8974 | 0.9526 | 0.9242 | 0.6546 | 0.8037 | 0.7215 | 0.7633 | 0.8874 | 0.8207 | 0.9463 | 0.9658 | 0.9559 |
| 0.0482 | 2.0 | 1756 | 0.1437 | 0.8306 | 0.9196 | 0.8728 | 0.9667 | 0.9266 | 0.9271 | 0.9268 | 0.5929 | 0.8308 | 0.6920 | 0.7662 | 0.9016 | 0.8284 | 0.9551 | 0.9696 | 0.9623 |
| 0.0346 | 3.0 | 2634 | 0.1537 | 0.8284 | 0.9231 | 0.8732 | 0.9668 | 0.9238 | 0.9439 | 0.9338 | 0.5748 | 0.8416 | 0.6831 | 0.7911 | 0.8784 | 0.8325 | 0.9433 | 0.9756 | 0.9592 |
| 0.0248 | 4.0 | 3512 | 0.1447 | 0.8473 | 0.9251 | 0.8845 | 0.9702 | 0.9267 | 0.9428 | 0.9347 | 0.6182 | 0.8482 | 0.7151 | 0.8092 | 0.8919 | 0.8485 | 0.9526 | 0.9701 | 0.9613 |
| 0.0166 | 5.0 | 4390 | 0.1612 | 0.8405 | 0.9275 | 0.8818 | 0.9675 | 0.9330 | 0.9407 | 0.9368 | 0.5894 | 0.8471 | 0.6951 | 0.8036 | 0.9031 | 0.8504 | 0.9562 | 0.9723 | 0.9642 |
| 0.0113 | 6.0 | 5268 | 0.1651 | 0.8404 | 0.9293 | 0.8826 | 0.9681 | 0.9325 | 0.9472 | 0.9398 | 0.5968 | 0.8460 | 0.6999 | 0.7967 | 0.9031 | 0.8466 | 0.9537 | 0.9723 | 0.9629 |
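Span-level precision/recall/F1 with per-entity breakdowns, as reported above, are what the seqeval package computes from BIO tag sequences. A toy sketch follows; the tag sequences here are made up purely for illustration.

```python
import evaluate  # pip install evaluate seqeval

seqeval = evaluate.load("seqeval")

# Made-up gold and predicted BIO tag sequences for two sentences.
references = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O"]]
predictions = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"], results["overall_f1"])
print(results["PER"])  # per-entity dict: precision, recall, f1, number
```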
### Framework versions
- Transformers 4.53.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2