---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioLinkBERT-LitCovid-v1.2.4
  results: []
---

# BioLinkBERT-LitCovid-v1.2.4

This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset (the model name suggests the LitCovid multi-label topic-classification corpus, but this is not documented here). It achieves the following results on the evaluation set:
- Loss: 0.2160
- F1 micro: 0.8926
- F1 macro: 0.3237
- F1 weighted: 0.9016
- F1 samples: 0.9024
- Precision micro: 0.8426
- Precision macro: 0.2736
- Precision weighted: 0.8627
- Precision samples: 0.8871
- Recall micro: 0.9490
- Recall macro: 0.4834
- Recall weighted: 0.9490
- Recall samples: 0.9544
- Roc Auc: 0.9697
- Accuracy: 0.7353

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `transformers.TrainingArguments` is shown below):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.4454 | 1.0 | 2248 | 0.3019 | 0.8637 | 0.2988 | 0.8757 | 0.8789 | 0.7937 | 0.2500 | 0.8205 | 0.8518 | 0.9471 | 0.4390 | 0.9471 | 0.9528 | 0.9669 | 0.6618 |
| 0.2453 | 2.0 | 4496 | 0.2696 | 0.8852 | 0.3387 | 0.8917 | 0.8947 | 0.8231 | 0.2862 | 0.8377 | 0.8701 | 0.9574 | 0.4723 | 0.9574 | 0.9602 | 0.9731 | 0.7056 |
| 0.1271 | 3.0 | 6744 | 0.2160 | 0.8926 | 0.3237 | 0.9016 | 0.9024 | 0.8426 | 0.2736 | 0.8627 | 0.8871 | 0.9490 | 0.4834 | 0.9490 | 0.9544 | 0.9697 | 0.7353 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
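
### Training configuration sketch

A minimal sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`. The number of labels, the `problem_type`, and the evaluation strategy are assumptions inferred from the multi-label metrics and the per-epoch results table, not documented facts about this run; dataset loading is omitted entirely.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

NUM_LABELS = 7  # assumption: the BioCreative LitCovid task uses 7 topic labels

tokenizer = AutoTokenizer.from_pretrained("michiyasunaga/BioLinkBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "michiyasunaga/BioLinkBERT-base",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # inferred from the multi-label metrics
)

training_args = TrainingArguments(
    output_dir="BioLinkBERT-LitCovid-v1.2.4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",   # the listed scheduler (also the library default)
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
)

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults
# (adam_beta1, adam_beta2, adam_epsilon), so no explicit override is needed.

# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,  # hypothetical tokenized datasets
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```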
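
### Metric definitions

The micro/macro/weighted/samples averages reported above follow scikit-learn's conventions, and the reported Accuracy is most likely exact-match (subset) accuracy over all labels, which would explain why it sits well below the F1 scores. A minimal sketch with placeholder arrays:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Placeholder multi-hot labels and sigmoid scores, shape (n_samples, n_labels).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_score = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4], [0.6, 0.7, 0.3]])
y_pred = (y_score >= 0.5).astype(int)

for avg in ("micro", "macro", "weighted", "samples"):
    print(f"F1 {avg}:", f1_score(y_true, y_pred, average=avg))

print("Roc Auc:", roc_auc_score(y_true, y_score, average="micro"))
print("Accuracy (exact match):", accuracy_score(y_true, y_pred))
```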
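
## How to use

A minimal multi-label inference sketch. The repository id below is an assumption about where this checkpoint is hosted, and 0.5 is an arbitrary decision threshold; adjust both for actual use.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "BioLinkBERT-LitCovid-v1.2.4"  # hypothetical hub id; replace as needed
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Remdesivir shortened time to recovery in hospitalized COVID-19 patients."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: an independent sigmoid per label, not a softmax.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```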