sofia-todeschini committed
Commit e1bb49f · Parent: d7c2d54

Update README.md

Files changed (1): README.md (+38 −27)
---
license: mit
---

# BioLinkBERT-LitCovid-v1.0

This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1098
- F1: 0.8992
- Roc Auc: 0.9330
- Accuracy: 0.7945
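
Reporting F1 and ROC AUC alongside a noticeably lower accuracy is typical of a multi-label setup, where accuracy counts only exact label-set matches; the card does not document the task, so this is an assumption. Under that assumption, a minimal sketch of how raw logits would be mapped to per-label predictions with a sigmoid and a 0.5 decision threshold (the threshold is also an assumption, not taken from this card):

```python
import math

def predict_labels(logits, threshold=0.5):
    """Map raw logits to independent per-label 0/1 decisions (multi-label setup)."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [int(p >= threshold) for p in probs]

# Example with three hypothetical labels: the first and third clear the
# threshold (sigmoid(0.0) is exactly 0.5), the second does not.
print(predict_labels([2.0, -1.5, 0.0]))  # → [1, 0, 1]
```

Because each label is thresholded independently, any number of labels (including none) can fire for a single input, unlike the single-argmax decision used in multi-class classification.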

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
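
With `lr_scheduler_type: linear` and the 3120 training steps shown in the results table, the learning rate decays linearly from 2e-05 toward 0 over the run. A sketch of that schedule, assuming zero warmup steps (the card does not list a warmup setting):

```python
def linear_lr(step, total_steps=3120, base_lr=2e-05):
    # Linear schedule with no warmup: the learning rate falls from base_lr
    # at step 0 to 0.0 at the final step.
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))     # → 2e-05 at the start of training
print(linear_lr(1560))  # → 1e-05 halfway through
print(linear_lr(3120))  # → 0.0 at the end
```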

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1172        | 1.0   | 3120 | 0.1098          | 0.8992 | 0.9330  | 0.7945   |

### Framework versions

- Transformers 4.28.0
- PyTorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
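
To reproduce the environment, the versions above can be pinned in a requirements file (a sketch; the CUDA-specific `+cu118` PyTorch wheel typically requires installing from the PyTorch index rather than plain PyPI):

```
transformers==4.28.0
torch==2.0.1
datasets==2.12.0
tokenizers==0.13.3
```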