---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioLinkBERT-LitCovid-v1.2.2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# BioLinkBERT-LitCovid-v1.2.2

This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2409
- F1 micro: 0.9209
- F1 macro: 0.8813
- F1 weighted: 0.9216
- F1 samples: 0.9216
- Precision micro: 0.8926
- Precision macro: 0.8430
- Precision weighted: 0.8949
- Precision samples: 0.9138
- Recall micro: 0.9510
- Recall macro: 0.9272
- Recall weighted: 0.9510
- Recall samples: 0.9564
- Roc Auc: 0.9622
- Accuracy: 0.7805

## Model description

More information needed

## Intended uses & limitations

More information needed
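
Pending fuller documentation, the sketch below shows one way to run inference with this model. It is a minimal example only: the Hub repo id is a placeholder, and the sigmoid-plus-0.5-threshold decoding is an assumption, since the card does not document the label set or decision threshold.

```python
# Minimal multi-label inference sketch (assumptions flagged in comments).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "your-username/BioLinkBERT-LitCovid-v1.2.2"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Example abstract about COVID-19 transmission and prevention."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed decoding: independent sigmoid per label with a 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```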

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
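
For reference, these settings map onto a Transformers 4.28 `TrainingArguments` configuration roughly as sketched below; the `output_dir` and anything else not listed above are assumptions.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above
# (Transformers 4.28 API). Only the listed values come from this card;
# output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BioLinkBERT-LitCovid-v1.2.2",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```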

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.2394        | 1.0   | 2183 | 0.2237          | 0.9040   | 0.8670   | 0.9056      | 0.9069     | 0.8548          | 0.8161          | 0.8601             | 0.8857            | 0.9592       | 0.9364       | 0.9592          | 0.9624         | 0.9607  | 0.7319   |
| 0.1798        | 2.0   | 4366 | 0.2275          | 0.9171   | 0.8758   | 0.9182      | 0.9191     | 0.8855          | 0.8336          | 0.8888             | 0.9097            | 0.9510       | 0.9288       | 0.9510          | 0.9571         | 0.9612  | 0.7705   |
| 0.1408        | 3.0   | 6549 | 0.2409          | 0.9209   | 0.8813   | 0.9216      | 0.9216     | 0.8926          | 0.8430          | 0.8949             | 0.9138            | 0.9510       | 0.9272       | 0.9510          | 0.9564         | 0.9622  | 0.7805   |
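
The metric names above follow scikit-learn's averaging modes for multi-label classification (micro, macro, weighted, and samples averages over per-label scores), and the much lower accuracy figure is consistent with exact-match (subset) accuracy. Below is a sketch of how such values are typically computed; the arrays, threshold, and ROC AUC averaging are illustrative assumptions:

```python
# Illustrative computation of the reported multi-label metrics with
# scikit-learn. y_true/y_prob are toy arrays of shape (n_samples, n_labels);
# the 0.5 threshold and micro-averaged ROC AUC are assumptions.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([[1, 0, 1], [0, 1, 0]])              # gold label indicators
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3]])  # predicted probabilities
y_pred = (y_prob > 0.5).astype(int)                    # thresholded predictions

for avg in ("micro", "macro", "weighted", "samples"):
    print(avg,
          f1_score(y_true, y_pred, average=avg),
          precision_score(y_true, y_pred, average=avg),
          recall_score(y_true, y_pred, average=avg))

print("roc_auc", roc_auc_score(y_true, y_prob, average="micro"))
print("accuracy", accuracy_score(y_true, y_pred))      # exact-match accuracy
```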

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3