Commit fccbf2c by sofia-todeschini · Parent: b9315cc

update model card README.md

Files changed (1): README.md added (+77 lines)
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Bioformer-LitCovid-v1.3.1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Bioformer-LitCovid-v1.3.1

This model is a fine-tuned version of [bioformers/bioformer-litcovid](https://huggingface.co/bioformers/bioformer-litcovid) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4639
- Hamming loss: 0.0375
- F1 micro: 0.7254
- F1 macro: 0.2721
- F1 weighted: 0.8153
- F1 samples: 0.8091
- Precision micro: 0.5970
- Precision macro: 0.2139
- Precision weighted: 0.7445
- Precision samples: 0.7700
- Recall micro: 0.9243
- Recall macro: 0.7966
- Recall weighted: 0.9243
- Recall samples: 0.9342
- Roc Auc: 0.9445
- Accuracy: 0.5243

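The averaging variants above (micro/macro/weighted/samples) follow scikit-learn's conventions for multi-label classification. The card does not include its evaluation code; as an illustrative sketch only (the toy labels and the 0.5 decision threshold below are assumptions), the reported quantities can be computed like this:

```python
# Illustrative only: toy labels and an assumed 0.5 threshold, not the card's eval code.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, hamming_loss, roc_auc_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])                    # multi-hot gold labels
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4], [0.6, 0.7, 0.3]])  # sigmoid scores
y_pred = (y_prob > 0.5).astype(int)                                     # thresholded predictions

print("Hamming loss:", hamming_loss(y_true, y_pred))
# precision_score and recall_score accept the same average= options as f1_score.
for avg in ("micro", "macro", "weighted", "samples"):
    print(f"F1 {avg}:", f1_score(y_true, y_pred, average=avg, zero_division=0))
print("Roc Auc (micro):", roc_auc_score(y_true, y_prob, average="micro"))
# On multi-label input, accuracy_score is subset accuracy (exact label-set match),
# which is why the reported Accuracy (0.5243) sits well below F1 micro.
print("Accuracy:", accuracy_score(y_true, y_pred))
```
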
## Model description

More information needed

## Intended uses & limitations

More information needed

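Since the card leaves this section empty, the following is purely a minimal inference sketch: the repo id is hypothetical, and the multi-label sigmoid scoring and 0.5 threshold are assumptions inferred from the samples-averaged metrics above, not statements from the card.

```python
# Hypothetical usage sketch, NOT from the card: repo id and 0.5 threshold are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "sofia-todeschini/Bioformer-LitCovid-v1.3.1"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Example abstract discussing COVID-19 vaccine efficacy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label scoring: apply a sigmoid per label (not a softmax across labels)
# and keep every label whose probability clears the threshold.
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(labels)
```
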
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

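As a reconstruction sketch, the list above maps onto Transformers `TrainingArguments` roughly as follows; the `output_dir` is a placeholder, and treating the batch sizes as per-device and the evaluation schedule as per-epoch are assumptions (the results table reports one row per epoch):

```python
# Reconstruction sketch of the hyperparameters above; output_dir and the
# per-epoch evaluation schedule are assumptions, not stated by the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Bioformer-LitCovid-v1.3.1",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",
)
```
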
### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.9561 | 1.0 | 1136 | 0.5778 | 0.0745 | 0.5683 | 0.2036 | 0.7263 | 0.6631 | 0.4123 | 0.1552 | 0.6235 | 0.5852 | 0.9144 | 0.7912 | 0.9144 | 0.9216 | 0.9203 | 0.2653 |
| 0.7759 | 2.0 | 2272 | 0.4875 | 0.0440 | 0.6899 | 0.2545 | 0.7872 | 0.7686 | 0.5543 | 0.1978 | 0.7076 | 0.7196 | 0.9134 | 0.7626 | 0.9134 | 0.9238 | 0.9359 | 0.4380 |
| 0.6398 | 3.0 | 3408 | 0.4722 | 0.0385 | 0.7188 | 0.2699 | 0.8005 | 0.7910 | 0.5907 | 0.2101 | 0.7250 | 0.7463 | 0.9179 | 0.7580 | 0.9179 | 0.9274 | 0.9409 | 0.4832 |
| 0.5712 | 4.0 | 4544 | 0.4652 | 0.0374 | 0.7264 | 0.2754 | 0.8096 | 0.8018 | 0.5980 | 0.2151 | 0.7347 | 0.7582 | 0.9250 | 0.7774 | 0.9250 | 0.9343 | 0.9449 | 0.5034 |
| 0.4337 | 5.0 | 5680 | 0.4639 | 0.0375 | 0.7254 | 0.2721 | 0.8153 | 0.8091 | 0.5970 | 0.2139 | 0.7445 | 0.7700 | 0.9243 | 0.7966 | 0.9243 | 0.9342 | 0.9445 | 0.5243 |


### Framework versions

- Transformers 4.28.0
- PyTorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3