Mardiyyah committed (verified)
Commit 2bdbc2a · 1 Parent(s): fe8ab07

Model save
README.md ADDED
@@ -0,0 +1,161 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: BioHackathon_Lipids-tapt_grouped_llrd-LR_2e-05
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # BioHackathon_Lipids-tapt_grouped_llrd-LR_2e-05
+
+ This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9814
+ - Accuracy: 0.7729
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 3407
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-06, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.06
+ - num_epochs: 100
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-------:|:----:|:---------------:|:--------:|
+ | 1.1166 | 1.0 | 29 | 1.0691 | 0.7638 |
+ | 1.1241 | 2.0 | 58 | 1.0680 | 0.7633 |
+ | 1.1143 | 3.0 | 87 | 1.0080 | 0.7744 |
+ | 1.1051 | 4.0 | 116 | 1.0305 | 0.7686 |
+ | 1.0955 | 5.0 | 145 | 1.0445 | 0.7660 |
+ | 1.0724 | 6.0 | 174 | 1.0239 | 0.7744 |
+ | 1.0624 | 7.0 | 203 | 1.0370 | 0.7692 |
+ | 1.0677 | 8.0 | 232 | 0.9991 | 0.7723 |
+ | 1.0773 | 9.0 | 261 | 1.0491 | 0.7634 |
+ | 1.0402 | 10.0 | 290 | 0.9966 | 0.7767 |
+ | 1.0531 | 11.0 | 319 | 1.0393 | 0.7629 |
+ | 1.057 | 12.0 | 348 | 0.9912 | 0.7764 |
+ | 1.0384 | 13.0 | 377 | 1.0113 | 0.7719 |
+ | 1.0446 | 14.0 | 406 | 0.9877 | 0.7779 |
+ | 1.0166 | 15.0 | 435 | 1.0201 | 0.7726 |
+ | 1.0443 | 16.0 | 464 | 1.0053 | 0.7710 |
+ | 1.0316 | 17.0 | 493 | 0.9932 | 0.7750 |
+ | 1.0414 | 18.0 | 522 | 1.0127 | 0.7756 |
+ | 1.0125 | 19.0 | 551 | 0.9969 | 0.7753 |
+ | 1.0228 | 20.0 | 580 | 0.9773 | 0.7771 |
+ | 1.0185 | 21.0 | 609 | 0.9927 | 0.7705 |
+ | 1.0141 | 22.0 | 638 | 0.9924 | 0.7717 |
+ | 0.9937 | 23.0 | 667 | 1.0443 | 0.7690 |
+ | 0.9783 | 24.0 | 696 | 1.0183 | 0.7721 |
+ | 0.9984 | 25.0 | 725 | 0.9907 | 0.7765 |
+ | 0.9976 | 26.0 | 754 | 0.9765 | 0.7751 |
+ | 0.9883 | 27.0 | 783 | 1.0047 | 0.7697 |
+ | 0.9853 | 28.0 | 812 | 1.0013 | 0.7693 |
+ | 0.9915 | 29.0 | 841 | 0.9874 | 0.7762 |
+ | 0.9716 | 30.0 | 870 | 0.9950 | 0.7758 |
+ | 0.9682 | 31.0 | 899 | 0.9853 | 0.7744 |
+ | 0.9426 | 32.0 | 928 | 0.9986 | 0.7688 |
+ | 0.9499 | 33.0 | 957 | 1.0205 | 0.7718 |
+ | 0.9462 | 34.0 | 986 | 1.0001 | 0.7735 |
+ | 0.943 | 35.0 | 1015 | 0.9643 | 0.7783 |
+ | 0.9302 | 36.0 | 1044 | 0.9802 | 0.7765 |
+ | 0.9462 | 37.0 | 1073 | 0.9950 | 0.7748 |
+ | 0.9501 | 38.0 | 1102 | 0.9890 | 0.7727 |
+ | 0.9463 | 39.0 | 1131 | 1.0448 | 0.7618 |
+ | 0.9475 | 40.0 | 1160 | 1.0014 | 0.7775 |
+ | 0.9476 | 41.0 | 1189 | 0.9993 | 0.7746 |
+ | 0.924 | 42.0 | 1218 | 1.0299 | 0.7667 |
+ | 0.9262 | 43.0 | 1247 | 1.0259 | 0.7698 |
+ | 0.9337 | 44.0 | 1276 | 1.0585 | 0.7629 |
+ | 0.9121 | 45.0 | 1305 | 1.0140 | 0.7732 |
+ | 0.9068 | 46.0 | 1334 | 1.0237 | 0.7739 |
+ | 0.903 | 47.0 | 1363 | 1.0173 | 0.7706 |
+ | 0.93 | 48.0 | 1392 | 1.0091 | 0.7681 |
+ | 0.9286 | 49.0 | 1421 | 0.9986 | 0.7685 |
+ | 0.9034 | 50.0 | 1450 | 1.0186 | 0.7724 |
+ | 0.8983 | 51.0 | 1479 | 0.9912 | 0.7748 |
+ | 0.8936 | 52.0 | 1508 | 1.0157 | 0.7693 |
+ | 0.9052 | 53.0 | 1537 | 0.9931 | 0.7754 |
+ | 0.8906 | 54.0 | 1566 | 1.0099 | 0.7707 |
+ | 0.8781 | 55.0 | 1595 | 0.9913 | 0.7722 |
+ | 0.8954 | 56.0 | 1624 | 1.0015 | 0.7755 |
+ | 0.8734 | 57.0 | 1653 | 0.9900 | 0.7755 |
+ | 0.8701 | 58.0 | 1682 | 1.0097 | 0.7706 |
+ | 0.8732 | 59.0 | 1711 | 0.9912 | 0.7749 |
+ | 0.8618 | 60.0 | 1740 | 0.9636 | 0.7798 |
+ | 0.872 | 61.0 | 1769 | 0.9952 | 0.7694 |
+ | 0.8635 | 62.0 | 1798 | 1.0329 | 0.7668 |
+ | 0.8654 | 63.0 | 1827 | 1.0099 | 0.7689 |
+ | 0.8739 | 64.0 | 1856 | 0.9887 | 0.7753 |
+ | 0.8657 | 65.0 | 1885 | 1.0044 | 0.7741 |
+ | 0.8629 | 66.0 | 1914 | 0.9935 | 0.7722 |
+ | 0.8645 | 67.0 | 1943 | 0.9585 | 0.7788 |
+ | 0.8646 | 68.0 | 1972 | 0.9810 | 0.7743 |
+ | 0.8605 | 69.0 | 2001 | 1.0097 | 0.7668 |
+ | 0.8569 | 70.0 | 2030 | 0.9732 | 0.7735 |
+ | 0.8636 | 71.0 | 2059 | 1.0018 | 0.7715 |
+ | 0.8683 | 72.0 | 2088 | 0.9725 | 0.7765 |
+ | 0.8818 | 73.0 | 2117 | 0.9742 | 0.7776 |
+ | 0.8476 | 74.0 | 2146 | 1.0173 | 0.7740 |
+ | 0.8457 | 75.0 | 2175 | 0.9999 | 0.7736 |
+ | 0.8591 | 76.0 | 2204 | 1.0032 | 0.7736 |
+ | 0.8641 | 77.0 | 2233 | 0.9888 | 0.7788 |
+ | 0.8441 | 78.0 | 2262 | 0.9899 | 0.7732 |
+ | 0.8493 | 79.0 | 2291 | 0.9808 | 0.7793 |
+ | 0.8735 | 80.0 | 2320 | 0.9472 | 0.7807 |
+ | 0.8508 | 81.0 | 2349 | 1.0416 | 0.7729 |
+ | 0.8582 | 82.0 | 2378 | 0.9904 | 0.7741 |
+ | 0.8453 | 83.0 | 2407 | 0.9928 | 0.7730 |
+ | 0.8426 | 84.0 | 2436 | 0.9761 | 0.7735 |
+ | 0.8628 | 85.0 | 2465 | 1.0195 | 0.7714 |
+ | 0.8529 | 86.0 | 2494 | 0.9920 | 0.7749 |
+ | 0.8533 | 87.0 | 2523 | 1.0019 | 0.7701 |
+ | 0.8477 | 88.0 | 2552 | 1.0178 | 0.7682 |
+ | 0.8328 | 89.0 | 2581 | 1.0003 | 0.7744 |
+ | 0.8427 | 90.0 | 2610 | 0.9917 | 0.7755 |
+ | 0.8473 | 91.0 | 2639 | 0.9900 | 0.7743 |
+ | 0.8518 | 92.0 | 2668 | 0.9994 | 0.7677 |
+ | 0.8307 | 93.0 | 2697 | 0.9798 | 0.7764 |
+ | 0.857 | 94.0 | 2726 | 0.9803 | 0.7759 |
+ | 0.8394 | 95.0 | 2755 | 1.0033 | 0.7707 |
+ | 0.845 | 96.0 | 2784 | 1.0312 | 0.7672 |
+ | 0.8586 | 96.5614 | 2800 | 0.9814 | 0.7729 |
+
+ ### Framework versions
+
+ - Transformers 4.48.2
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.0.2
+ - Tokenizers 0.21.0
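The card itself does not include a usage example, but config.json (below) registers the checkpoint as a `BertForMaskedLM`, so a quick smoke test with the `fill-mask` pipeline is a reasonable starting point. A minimal sketch, assuming the repository id `Mardiyyah/BioHackathon_Lipids-tapt_grouped_llrd-LR_2e-05` (inferred from the committer and model name, not stated in the card) and the Transformers version listed above:

```python
from transformers import pipeline

# Assumed repository id; adjust if the checkpoint lives under a different namespace.
fill_mask = pipeline(
    "fill-mask",
    model="Mardiyyah/BioHackathon_Lipids-tapt_grouped_llrd-LR_2e-05",
)

# [MASK] matches the mask token declared in special_tokens_map.json below.
for candidate in fill_mask("Cholesterol is a [MASK] found in cell membranes."):
    print(candidate["token_str"], round(candidate["score"], 3))
```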
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
generation_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "_from_model_config": true,
+   "pad_token_id": 0,
+   "transformers_version": "4.48.2"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42a0b146434f693b3679ab59e8698d4a3c549f2059ed58e887c76b0b220b812e
+ size 438080896
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
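One detail worth noting: `model_max_length` above is the Transformers placeholder for "unset" (a very large integer), so the practical sequence limit comes from `max_position_embeddings: 512` in config.json. A small sketch of how one might cap it explicitly (repository id assumed, as in the earlier example):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "Mardiyyah/BioHackathon_Lipids-tapt_grouped_llrd-LR_2e-05"  # assumed repo id
)

# The tokenizer config leaves model_max_length at the "unset" sentinel;
# cap it at the model's positional-embedding limit (512, per config.json).
tok.model_max_length = 512
enc = tok("a long lipidomics abstract ...", truncation=True)
print(len(enc["input_ids"]))
```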
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c727f9ce071f9cabed6cb8f038ad0ffdab9251f737b282750b93e0475fb66cc
+ size 5752
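training_args.bin stores the serialized `TrainingArguments` used by the `Trainer`. Its exact contents are not shown in this commit, but the hyperparameters listed in README.md above suggest a configuration roughly like the sketch below; anything not listed in the card (such as `output_dir`) is a placeholder, not a fact from this repository:

```python
from transformers import TrainingArguments

# Reconstruction from the README hyperparameter list; unlisted fields are assumptions.
training_args = TrainingArguments(
    output_dir="BioHackathon_Lipids-tapt_grouped_llrd-LR_2e-05",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    seed=3407,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-06,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision
)
```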
vocab.txt ADDED
The diff for this file is too large to render. See raw diff