---
library_name: transformers
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BiomedBERT-AC-LF-Classification
  results: []
datasets:
- surrey-nlp/PLOD-CW-25
language:
- en
---

# BiomedBERT-AC-LF-Classification

This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on the [surrey-nlp/PLOD-CW-25](https://huggingface.co/datasets/surrey-nlp/PLOD-CW-25) dataset.

It achieves the following results on the evaluation set:
- Loss: 0.2703
- Precision: 0.7821
- Recall: 0.8686
- F1: 0.8231
- Accuracy: 0.9204

It achieves the following results on the test set:
- Loss: 0.1384
- Precision: 0.8473
- Recall: 0.9281
- F1: 0.8858
- Accuracy: 0.9529

## Model description

This model performs token classification to detect abbreviations and their long forms in biomedical text. Tokens are tagged in the BIO scheme with four labels: `B-AC` (abbreviation), `B-LF` (beginning of a long form), `I-LF` (inside a long form), and `O` (outside).

## Intended uses & limitations

The model is intended for abbreviation and long-form detection in English biomedical text, such as scientific abstracts and full-text articles. Performance on other domains or languages has not been evaluated.

## Training and evaluation data

The model was fine-tuned on the train split of [surrey-nlp/PLOD-CW-25](https://huggingface.co/datasets/surrey-nlp/PLOD-CW-25) and evaluated on its validation and test splits.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3341        | 1.0   | 125  | 0.2485          | 0.7727    | 0.8477 | 0.8084 | 0.9111   |
| 0.1633        | 2.0   | 250  | 0.2525          | 0.7767    | 0.8673 | 0.8195 | 0.9174   |
| 0.1293        | 3.0   | 375  | 0.2224          | 0.7855    | 0.8501 | 0.8165 | 0.9211   |
| 0.1081        | 4.0   | 500  | 0.2600          | 0.7780    | 0.8784 | 0.8252 | 0.9201   |
| 0.0938        | 5.0   | 625  | 0.2703          | 0.7821    | 0.8686 | 0.8231 | 0.9204   |

### Framework versions

- Transformers 4.52.4
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
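
## How to use

A minimal inference sketch using the `transformers` token-classification pipeline. The repo id below is a placeholder, not a confirmed hub path; substitute wherever this model is hosted, or a local directory with the fine-tuned weights.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual hub path of this model,
# or a local directory containing the fine-tuned weights.
model_id = "BiomedBERT-AC-LF-Classification"

# aggregation_strategy="simple" merges subword pieces and groups
# contiguous B-/I- tags into single AC / LF spans.
tagger = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

text = "Magnetic resonance imaging (MRI) is widely used in clinical practice."
for span in tagger(text):
    print(f"{span['entity_group']:>2}  {span['word']!r}  score={span['score']:.3f}")
# Expected: an LF span covering "Magnetic resonance imaging"
# and an AC span covering "MRI" (exact spans depend on the model).
```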
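
## Reproducing the training setup

The hyperparameters above map onto `TrainingArguments` roughly as follows. This is a sketch, not the original training script: `output_dir` is arbitrary, and per-epoch evaluation is inferred from the results table.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; betas and epsilon are the
# adamw_torch defaults, matching the values on this card.
training_args = TrainingArguments(
    output_dir="BiomedBERT-AC-LF-Classification",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",  # validation metrics are reported once per epoch
)
```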
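
## Computing span-level metrics

The precision, recall, and F1 values reported above are span-level metrics of the kind computed by `seqeval`; whether the original evaluation used it is an assumption. A minimal sketch with toy BIO sequences (requires the `evaluate` and `seqeval` packages, installed separately):

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Toy example: one sentence, gold vs. predicted BIO tags.
references  = [["B-LF", "I-LF", "I-LF", "O", "B-AC", "O"]]
predictions = [["B-LF", "I-LF", "O",    "O", "B-AC", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
# The truncated LF span does not count as a match, so precision,
# recall, and F1 are all 0.5; token-level accuracy is 5/6.
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```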