---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioLinkBERT-Large-LitCovid-1.4
  results: []
---

# BioLinkBERT-Large-LitCovid-1.4

This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large). The training dataset was not recorded by the Trainer, but the model name indicates the LitCovid multi-label topic-classification corpus.
It achieves the following results on the evaluation set:
- Loss: 0.5976
- Hamming loss: 0.0604
- F1 micro: 0.6804
- F1 macro: 0.5425
- F1 weighted: 0.7357
- F1 samples: 0.6807
- Precision micro: 0.5509
- Precision macro: 0.4271
- Precision weighted: 0.6552
- Precision samples: 0.5921
- Recall micro: 0.8895
- Recall macro: 0.8221
- Recall weighted: 0.8895
- Recall samples: 0.9063
- Roc Auc: 0.9165
- Accuracy: 0.1370
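
The averaged scores above are the standard multi-label variants. Below is a minimal sketch of how such numbers can be computed with scikit-learn; the 0.5 decision threshold, the toy arrays, and the reading of `Accuracy` as subset (exact-match) accuracy are assumptions, not recorded settings:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,  # on multi-hot labels this is subset (exact-match) accuracy
    f1_score,
    hamming_loss,
    precision_score,
    recall_score,
    roc_auc_score,
)

# Toy stand-ins: (num_examples, num_labels) multi-hot truth and sigmoid scores.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4], [0.6, 0.3, 0.2]])
y_pred = (y_prob >= 0.5).astype(int)  # assumed 0.5 threshold

metrics = {
    "hamming_loss": hamming_loss(y_true, y_pred),
    "roc_auc": roc_auc_score(y_true, y_prob, average="micro"),
    "accuracy": accuracy_score(y_true, y_pred),
}
for avg in ("micro", "macro", "weighted", "samples"):
    metrics[f"f1_{avg}"] = f1_score(y_true, y_pred, average=avg, zero_division=0)
    metrics[f"precision_{avg}"] = precision_score(y_true, y_pred, average=avg, zero_division=0)
    metrics[f"recall_{avg}"] = recall_score(y_true, y_pred, average=avg, zero_division=0)
print(metrics)
```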

## Model description

[michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) with a multi-label sequence-classification head, fine-tuned for topic classification of COVID-19 literature. The Hamming-loss and sample-wise metrics above indicate a multi-label setup; further details were not recorded.

## Intended uses & limitations

Presumably intended for assigning LitCovid-style topic labels to biomedical article titles and abstracts. Note the recall-heavy operating point: micro recall is 0.89 against micro precision of 0.55, so the model over-predicts labels; downstream users may want to raise the decision threshold if precision matters more than coverage.
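
A minimal inference sketch, assuming the checkpoint is published under this repo name and uses the usual multi-label (sigmoid) head; the repo id, example text, and 0.5 threshold are assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
model_id = "BioLinkBERT-Large-LitCovid-1.4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Remdesivir in hospitalized adults with severe COVID-19 pneumonia."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: independent sigmoid per label, assumed 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```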

## Training and evaluation data

Not recorded by the Trainer. The model name suggests the LitCovid corpus of COVID-19 literature with multi-label topic annotations; split sizes and preprocessing are unknown.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
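
For reference, the list above corresponds roughly to the following `transformers.TrainingArguments` (a sketch against the Transformers 4.28 API noted below; `output_dir` and the per-epoch evaluation strategy are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BioLinkBERT-Large-LitCovid-1.4",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption, matching the per-epoch table below
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults.
```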

### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.5869        | 1.0   | 1151 | 0.5737          | 0.0978       | 0.5682   | 0.4375   | 0.6759      | 0.5754     | 0.4172          | 0.3269          | 0.5906             | 0.4591            | 0.8901       | 0.8593       | 0.8901          | 0.9076         | 0.8966  | 0.0421   |
| 0.4636        | 2.0   | 2302 | 0.5316          | 0.0805       | 0.6179   | 0.4702   | 0.7052      | 0.6237     | 0.4704          | 0.3554          | 0.6181             | 0.5153            | 0.9005       | 0.8611       | 0.9005          | 0.9160         | 0.9107  | 0.0812   |
| 0.3782        | 3.0   | 3453 | 0.5382          | 0.0760       | 0.6321   | 0.4929   | 0.7146      | 0.6327     | 0.4864          | 0.3757          | 0.6293             | 0.5230            | 0.9027       | 0.8556       | 0.9027          | 0.9183         | 0.9142  | 0.0797   |
| 0.3031        | 4.0   | 4605 | 0.5807          | 0.0619       | 0.6754   | 0.5346   | 0.7343      | 0.6744     | 0.5437          | 0.4189          | 0.6531             | 0.5820            | 0.8915       | 0.8274       | 0.8915          | 0.9089         | 0.9166  | 0.1235   |
| 0.2625        | 5.0   | 5755 | 0.5976          | 0.0604       | 0.6804   | 0.5425   | 0.7357      | 0.6807     | 0.5509          | 0.4271          | 0.6552             | 0.5921            | 0.8895       | 0.8221       | 0.8895          | 0.9063         | 0.9165  | 0.1370   |


### Framework versions

- Transformers 4.28.0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3