sofia-todeschini committed
Commit eb2168c · Parent: a872318

update model card README.md

Files changed (1): README.md (added, +77 −0)
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioLinkBERT-LitCovid-v1.3.1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# BioLinkBERT-LitCovid-v1.3.1

This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6883
- Hamming loss: 0.0171
- F1 micro: 0.8542
- F1 macro: 0.3828
- F1 weighted: 0.8818
- F1 samples: 0.8804
- Precision micro: 0.7855
- Precision macro: 0.3067
- Precision weighted: 0.8407
- Precision samples: 0.8641
- Recall micro: 0.9360
- Recall macro: 0.7145
- Recall weighted: 0.9360
- Recall samples: 0.9459
- Roc Auc: 0.9607
- Accuracy: 0.6896
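
These are standard multi-label classification metrics. Below is a minimal sketch of how figures like these can be computed with scikit-learn, assuming `y_true` is a 0/1 label-indicator array of shape `(n_samples, n_labels)` and `y_prob` holds the model's per-label sigmoid scores; the 0.5 decision threshold and the micro-averaged ROC AUC are assumptions, not documented in this card:

```python
# Hedged sketch of the reported multi-label metrics using scikit-learn.
# y_true: 0/1 indicator array, shape (n_samples, n_labels).
# y_prob: per-label sigmoid scores from the model, same shape.
# The 0.5 threshold and micro-averaged ROC AUC are assumptions.
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, hamming_loss,
    precision_score, recall_score, roc_auc_score,
)

def multilabel_metrics(y_prob: np.ndarray, y_true: np.ndarray, threshold: float = 0.5) -> dict:
    y_pred = (y_prob >= threshold).astype(int)
    metrics = {
        "hamming_loss": hamming_loss(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_prob, average="micro"),
        # Subset accuracy: a sample counts only if every label matches.
        "accuracy": accuracy_score(y_true, y_pred),
    }
    for avg in ("micro", "macro", "weighted", "samples"):
        metrics[f"f1_{avg}"] = f1_score(y_true, y_pred, average=avg, zero_division=0)
        metrics[f"precision_{avg}"] = precision_score(y_true, y_pred, average=avg, zero_division=0)
        metrics[f"recall_{avg}"] = recall_score(y_true, y_pred, average=avg, zero_division=0)
    return metrics
```

The wide gap between micro F1 (0.8542) and macro F1 (0.3828) suggests a long-tailed label distribution: macro averaging weights rare, poorly-predicted labels as heavily as frequent ones.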
## Model description

More information needed

## Intended uses & limitations

More information needed
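
Pending a proper write-up, the sketch below shows minimal multi-label inference with this checkpoint. The repo id, the assumption that the model is configured with `problem_type="multi_label_classification"`, the example title, and the 0.5 sigmoid threshold are all illustrative rather than documented here:

```python
# Hedged inference sketch. The repo id and the 0.5 sigmoid threshold are
# assumptions; the checkpoint is presumed to be a multi-label classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "sofia-todeschini/BioLinkBERT-LitCovid-v1.3.1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Efficacy of mRNA vaccines against SARS-CoV-2 variants in older adults."
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # one independent probability per label
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(labels)  # label names depend on how the checkpoint's id2label was set
```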
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
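
A hedged reconstruction of these settings as `transformers.TrainingArguments` (4.28-era API); only the listed values come from this card, while `output_dir` and the per-epoch evaluation strategy are illustrative:

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# Only the listed values come from the card; output_dir and the per-epoch
# evaluation strategy are illustrative assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BioLinkBERT-LitCovid-v1.3.1",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # matches the per-epoch results below
)
```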
### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 1.0638 | 1.0 | 2272 | 0.4414 | 0.0398 | 0.7141 | 0.2594 | 0.8318 | 0.8178 | 0.5807 | 0.2077 | 0.7729 | 0.7843 | 0.9269 | 0.8062 | 0.9269 | 0.9422 | 0.9445 | 0.5545 |
| 0.8571 | 2.0 | 4544 | 0.4364 | 0.0230 | 0.8122 | 0.3367 | 0.8645 | 0.8517 | 0.7236 | 0.2666 | 0.8255 | 0.8284 | 0.9254 | 0.7835 | 0.9254 | 0.9396 | 0.9527 | 0.6211 |
| 0.6709 | 3.0 | 6816 | 0.4827 | 0.0218 | 0.8222 | 0.3405 | 0.8723 | 0.8638 | 0.7297 | 0.2708 | 0.8239 | 0.8381 | 0.9415 | 0.7770 | 0.9415 | 0.9513 | 0.9609 | 0.6488 |
| 0.5093 | 4.0 | 9088 | 0.5695 | 0.0184 | 0.8457 | 0.3795 | 0.8781 | 0.8753 | 0.7692 | 0.3006 | 0.8333 | 0.8556 | 0.9390 | 0.7605 | 0.9390 | 0.9482 | 0.9615 | 0.6760 |
| 0.2957 | 5.0 | 11360 | 0.6883 | 0.0171 | 0.8542 | 0.3828 | 0.8818 | 0.8804 | 0.7855 | 0.3067 | 0.8407 | 0.8641 | 0.9360 | 0.7145 | 0.9360 | 0.9459 | 0.9607 | 0.6896 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
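
To recreate this environment, the versions above can be pinned directly; note that the `+cu118` PyTorch build comes from PyTorch's CUDA 11.8 wheel index rather than plain PyPI (this requirements sketch is an assumption, not part of the original card):

```text
transformers==4.28.0
torch==2.1.0+cu118
datasets==2.14.6
tokenizers==0.13.3
```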