# biobert-base-uncased-ner
This model is a fine-tuned version of dmis-lab/biobert-v1.1 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0299
- Overall Precision: 0.9705
- Overall Recall: 0.9796
- Overall F1: 0.9750
- Overall Accuracy: 0.9923

Per-entity results on the evaluation set:

| Entity | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| Cases | 0.9640 | 0.9705 | 0.9672 | 441 |
| Country | 0.9926 | 0.9963 | 0.9944 | 539 |
| Date | 0.9638 | 0.9705 | 0.9671 | 576 |
| Deaths | 0.9224 | 0.9597 | 0.9407 | 347 |
| Virus | 0.9927 | 0.9927 | 0.9927 | 549 |
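The overall scores are the micro-average of the per-entity results: summing true positives, predicted spans, and gold spans across all entity types reproduces the headline precision, recall, and F1. A quick sketch in plain Python, using only the per-entity numbers reported above:

```python
# Reconstruct the overall micro-averaged metrics from the per-entity scores.
# For each entity: true positives = recall * support; predicted = TP / precision.
entities = {
    # name: (precision, recall, support)
    "Cases":   (0.963963963963964,  0.9705215419501134, 441),
    "Country": (0.9926062846580407, 0.9962894248608535, 539),
    "Date":    (0.9637931034482758, 0.9704861111111112, 576),
    "Deaths":  (0.9224376731301939, 0.9596541786743515, 347),
    "Virus":   (0.9927140255009107, 0.9927140255009107, 549),
}

tp = sum(round(r * n) for p, r, n in entities.values())               # true positives
pred = sum(round(round(r * n) / p) for p, r, n in entities.values())  # predicted spans
support = sum(n for _, _, n in entities.values())                     # gold spans

overall_precision = tp / pred
overall_recall = tp / support
overall_f1 = 2 * tp / (pred + support)

print(round(overall_precision, 4), round(overall_recall, 4), round(overall_f1, 4))
# → 0.9705 0.9796 0.975
```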
## Model description
More information needed
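The card provides no usage snippet, but as a token-classification checkpoint the model should load with the standard transformers pipeline. A sketch, assuming the checkpoint's label names match the entity types in the metrics above (Cases, Country, Date, Deaths, Virus); the aggregation strategy and example text are illustrative choices, not from the card:

```python
# Sketch only: standard transformers token-classification inference.
# Repo id is taken from this card; the entity label set is assumed to match
# the types reported in the evaluation metrics.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nattkorat/biobert-base-uncased-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "As of 3 March 2020, Italy reported 2,036 cases and 52 deaths of COVID-19."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```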
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
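The hyperparameters above map directly onto a standard `TrainingArguments` configuration. A minimal sketch; the output directory and evaluation strategy are assumptions (the training log does show per-epoch validation), everything else comes from the list above:

```python
# Config sketch mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="biobert-base-uncased-ner",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",           # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",         # assumed from the per-epoch validation results
)
```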
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cases (P/R/F1) | Country (P/R/F1) | Date (P/R/F1) | Deaths (P/R/F1) | Virus (P/R/F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 291 | 0.0329 | 0.9713 / 0.9206 / 0.9453 | 0.9890 / 0.9963 / 0.9926 | 0.9498 / 0.9531 / 0.9515 | 0.9388 / 0.8847 / 0.9110 | 0.9927 / 0.9891 / 0.9909 | 0.9706 | 0.9551 | 0.9628 | 0.9901 |
| 0.0216 | 2.0 | 582 | 0.0336 | 0.9527 / 0.9592 / 0.9559 | 0.9908 / 0.9963 / 0.9935 | 0.9617 / 0.9583 / 0.9600 | 0.9011 / 0.9452 / 0.9226 | 0.9909 / 0.9891 / 0.9900 | 0.9640 | 0.9719 | 0.9679 | 0.9907 |
| 0.0216 | 3.0 | 873 | 0.0345 | 0.9556 / 0.9751 / 0.9652 | 0.9926 / 0.9963 / 0.9944 | 0.9536 / 0.9635 / 0.9585 | 0.9132 / 0.9395 / 0.9261 | 0.9909 / 0.9927 / 0.9918 | 0.9649 | 0.9759 | 0.9704 | 0.9914 |
| 0.0126 | 4.0 | 1164 | 0.0292 | 0.9683 / 0.9683 / 0.9683 | 0.9908 / 0.9963 / 0.9935 | 0.9655 / 0.9722 / 0.9689 | 0.9302 / 0.9597 / 0.9447 | 0.9927 / 0.9927 / 0.9927 | 0.9725 | 0.9796 | 0.9760 | 0.9925 |
| 0.0126 | 5.0 | 1455 | 0.0299 | 0.9640 / 0.9705 / 0.9672 | 0.9926 / 0.9963 / 0.9944 | 0.9638 / 0.9705 / 0.9671 | 0.9224 / 0.9597 / 0.9407 | 0.9927 / 0.9927 / 0.9927 | 0.9705 | 0.9796 | 0.9750 | 0.9923 |

Entity supports: Cases 441, Country 539, Date 576, Deaths 347, Virus 549.
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1