---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results: []
---

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9319
- Recall: 0.9488
- F1: 0.9403
- Accuracy: 0.9856

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal usage sketch appears under "How to use" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused, `adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

An approximate `TrainingArguments` equivalent of these settings is sketched under "Reproducing the training setup" at the end of this card.

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.1503 | 264  | 0.1354          | 0.7782    | 0.8544 | 0.8145 | 0.9625   |
| 0.2679        | 0.3007 | 528  | 0.0971          | 0.8526    | 0.9005 | 0.8759 | 0.9736   |
| 0.2679        | 0.4510 | 792  | 0.0887          | 0.8900    | 0.9222 | 0.9059 | 0.9781   |
| 0.105         | 0.6014 | 1056 | 0.0809          | 0.9094    | 0.9278 | 0.9185 | 0.9804   |
| 0.105         | 0.7517 | 1320 | 0.0714          | 0.9137    | 0.9342 | 0.9239 | 0.9812   |
| 0.0748        | 0.9021 | 1584 | 0.0645          | 0.9181    | 0.9377 | 0.9278 | 0.9836   |
| 0.0748        | 1.0524 | 1848 | 0.0735          | 0.9173    | 0.9392 | 0.9282 | 0.9825   |
| 0.0634        | 1.2027 | 2112 | 0.0692          | 0.9129    | 0.9389 | 0.9257 | 0.9826   |
| 0.0634        | 1.3531 | 2376 | 0.0691          | 0.9297    | 0.9478 | 0.9387 | 0.9851   |
| 0.0428        | 1.5034 | 2640 | 0.0660          | 0.9229    | 0.9448 | 0.9337 | 0.9844   |
| 0.0428        | 1.6538 | 2904 | 0.0602          | 0.9292    | 0.9450 | 0.9370 | 0.9855   |
| 0.0448        | 1.8041 | 3168 | 0.0603          | 0.9165    | 0.9461 | 0.9311 | 0.9844   |
| 0.0448        | 1.9544 | 3432 | 0.0636          | 0.9311    | 0.9458 | 0.9384 | 0.9848   |
| 0.0364        | 2.1048 | 3696 | 0.0686          | 0.9305    | 0.9461 | 0.9383 | 0.9853   |
| 0.0364        | 2.2551 | 3960 | 0.0632          | 0.9338    | 0.9497 | 0.9417 | 0.9857   |
| 0.0211        | 2.4055 | 4224 | 0.0644          | 0.9284    | 0.9450 | 0.9366 | 0.9845   |
| 0.0211        | 2.5558 | 4488 | 0.0628          | 0.9331    | 0.9458 | 0.9394 | 0.9846   |
| 0.0209        | 2.7062 | 4752 | 0.0602          | 0.9287    | 0.9473 | 0.9379 | 0.9853   |
| 0.0221        | 2.8565 | 5016 | 0.0616          | 0.9319    | 0.9488 | 0.9403 | 0.9856   |

### Framework versions

- Transformers 4.57.1
- PyTorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
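
## How to use

The exact entity label set depends on the training dataset, which is not documented in this card, so the sketch below is a minimal example rather than a verified recipe. The model identifier is a placeholder and should be replaced with the actual checkpoint path or Hub repository name:

```python
from transformers import pipeline

# "bert-finetuned-ner" is a placeholder identifier; point it at the actual
# checkpoint directory or Hub repository for this model.
ner = pipeline(
    "token-classification",
    model="bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Each result carries the predicted entity group, confidence score,
# and the character offsets of the span in the input text.
print(ner("My name is Clara and I live in Berkeley, California."))
```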
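
## Reproducing the training setup

As a rough sketch only, the hyperparameters listed above map onto `TrainingArguments` roughly as follows. The output directory is a placeholder, and the dataset, tokenization, and `Trainer` wiring are not documented in this card, so this is not a complete training script:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed in this card;
# anything not listed there (e.g. output_dir) is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",        # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```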