javicorvi committed
Commit ac8081f · 1 Parent(s): d8199f7

javicorvi/pretoxtm-ner

README.md ADDED
@@ -0,0 +1,66 @@
---
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
model-index:
- name: pretoxtm-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# pretoxtm-ner

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2356

Per-entity scores (precision / recall / F1 / support, rounded to four decimals):

| Entity             | Precision | Recall | F1     | Support |
|:-------------------|----------:|-------:|-------:|--------:|
| Study Test         | 0.8107    | 0.9054 | 0.8555 | 719     |
| Manifestation      | 0.8429    | 0.8967 | 0.8689 | 329     |
| Finding            | 0.7924    | 0.7908 | 0.7916 | 1429    |
| Specimen           | 0.7935    | 0.8428 | 0.8174 | 725     |
| Dose               | 0.8894    | 0.9316 | 0.9100 | 570     |
| Dose Qualification | 0.7500    | 0.7368 | 0.7434 | 57      |
| Sex                | 0.9282    | 0.9604 | 0.9440 | 202     |
| Group              | 0.6992    | 0.8304 | 0.7592 | 112     |
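The per-entity scores above are in the shape typically produced by entity-level BIO evaluation with the `seqeval` library (precision, recall, F1, and support per entity type). A minimal sketch of that kind of evaluation follows; it is illustrative only, with made-up tag sequences, and is not the original evaluation code.

```python
# Illustrative only: entity-level scores of this shape come from BIO-tag evaluation with seqeval.
from seqeval.metrics import classification_report

# Hypothetical gold and predicted tag sequences using this model's label set.
y_true = [["O", "B-FINDING", "I-FINDING", "O", "B-SEX", "B-DOSE", "I-DOSE"]]
y_pred = [["O", "B-FINDING", "I-FINDING", "O", "B-SEX", "B-DOSE", "O"]]

print(classification_report(y_true, y_pred))
```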

## Model description

More information needed

## Intended uses & limitations

More information needed
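The card does not yet include a usage example. As a minimal inference sketch (assuming the checkpoint is published under the repo id `javicorvi/pretoxtm-ner` from this commit, and that `transformers` is installed; the example sentence is illustrative):

```python
# Minimal inference sketch; the repo id and example sentence are assumptions, not from the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="javicorvi/pretoxtm-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)

text = "Increased alanine aminotransferase was observed in high-dose males at 100 mg/kg/day."
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```

With `aggregation_strategy="simple"`, adjacent B-/I- tokens are merged, so the output is one dictionary per predicted entity span (FINDING, SEX, DOSE, and so on).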

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
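A rough reconstruction of how these values correspond to the standard `transformers` `TrainingArguments` is sketched below. It is for illustration only; the output directory and evaluation strategy are assumptions, and the original training script may differ.

```python
# Hedged reconstruction of the hyperparameters above; output_dir and evaluation_strategy
# are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pretoxtm-ner",        # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",      # assumption, consistent with the per-epoch results below
)
```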

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 257  | 0.2315          |
| 0.0608        | 2.0   | 514  | 0.2285          |
| 0.0608        | 3.0   | 771  | 0.2356          |

Per-entity validation scores by epoch (precision / recall / F1 / support, rounded to four decimals):

| Epoch | Entity             | Precision | Recall | F1     | Support |
|:-----:|:-------------------|----------:|-------:|-------:|--------:|
| 1.0   | Study Test         | 0.7804    | 0.8846 | 0.8292 | 719     |
| 1.0   | Manifestation      | 0.8481    | 0.8997 | 0.8732 | 329     |
| 1.0   | Finding            | 0.8013    | 0.7110 | 0.7534 | 1429    |
| 1.0   | Specimen           | 0.7451    | 0.8469 | 0.7928 | 725     |
| 1.0   | Dose               | 0.9078    | 0.9333 | 0.9204 | 570     |
| 1.0   | Dose Qualification | 0.7031    | 0.7895 | 0.7438 | 57      |
| 1.0   | Sex                | 0.9242    | 0.9653 | 0.9443 | 202     |
| 1.0   | Group              | 0.6250    | 0.8929 | 0.7353 | 112     |
| 2.0   | Study Test         | 0.8037    | 0.9110 | 0.8540 | 719     |
| 2.0   | Manifestation      | 0.8468    | 0.8906 | 0.8681 | 329     |
| 2.0   | Finding            | 0.7780    | 0.8020 | 0.7898 | 1429    |
| 2.0   | Specimen           | 0.8053    | 0.8441 | 0.8242 | 725     |
| 2.0   | Dose               | 0.9048    | 0.9333 | 0.9188 | 570     |
| 2.0   | Dose Qualification | 0.7368    | 0.7368 | 0.7368 | 57      |
| 2.0   | Sex                | 0.9289    | 0.9703 | 0.9492 | 202     |
| 2.0   | Group              | 0.6783    | 0.8661 | 0.7608 | 112     |
| 3.0   | Study Test         | 0.8107    | 0.9054 | 0.8555 | 719     |
| 3.0   | Manifestation      | 0.8429    | 0.8967 | 0.8689 | 329     |
| 3.0   | Finding            | 0.7924    | 0.7908 | 0.7916 | 1429    |
| 3.0   | Specimen           | 0.7935    | 0.8428 | 0.8174 | 725     |
| 3.0   | Dose               | 0.8894    | 0.9316 | 0.9100 | 570     |
| 3.0   | Dose Qualification | 0.7500    | 0.7368 | 0.7434 | 57      |
| 3.0   | Sex                | 0.9282    | 0.9604 | 0.9440 | 202     |
| 3.0   | Group              | 0.6992    | 0.8304 | 0.7592 | 112     |

### Framework versions

- Transformers 4.33.3
- PyTorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
config.json ADDED
@@ -0,0 +1,64 @@
{
  "_name_or_path": "dmis-lab/biobert-v1.1",
  "architectures": [
    "BertForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-MANIFESTATION_FINDING",
    "2": "B-STUDY_TESTCD",
    "3": "B-DOSE",
    "4": "I-DOSE",
    "5": "B-SEX",
    "6": "I-STUDY_TESTCD",
    "7": "I-SEX",
    "8": "B-FINDING",
    "9": "I-FINDING",
    "10": "B-SPECIMEN",
    "11": "I-SPECIMEN",
    "12": "B-GROUP",
    "13": "I-GROUP",
    "14": "I-MANIFESTATION_FINDING",
    "15": "B-DOSE_QUALIFICATION",
    "16": "I-DOSE_QUALIFICATION"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-DOSE": 3,
    "B-DOSE_QUALIFICATION": 15,
    "B-FINDING": 8,
    "B-GROUP": 12,
    "B-MANIFESTATION_FINDING": 1,
    "B-SEX": 5,
    "B-SPECIMEN": 10,
    "B-STUDY_TESTCD": 2,
    "I-DOSE": 4,
    "I-DOSE_QUALIFICATION": 16,
    "I-FINDING": 9,
    "I-GROUP": 13,
    "I-MANIFESTATION_FINDING": 14,
    "I-SEX": 7,
    "I-SPECIMEN": 11,
    "I-STUDY_TESTCD": 6,
    "O": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.33.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}
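The `id2label` / `label2id` maps define a BIO tagging scheme over eight entity types. A small sketch of inspecting them from the published config (repo id assumed from this commit):

```python
# Inspect the label inventory declared in config.json (the repo id is an assumption).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("javicorvi/pretoxtm-ner")
entity_types = sorted({label.split("-", 1)[1] for label in config.label2id if label != "O"})

print(config.num_labels)   # 17 BIO tags, including "O"
print(entity_types)        # 8 entity types: DOSE, DOSE_QUALIFICATION, FINDING, GROUP, ...
```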
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e37bbed46e64bde07f5869f1235fccde65ab84cbcafbfd9e49a9ad6acbcb8e7e
size 430998761
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
{
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": false,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
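Per this config the checkpoint uses a cased `BertTokenizer` with a 512-token limit, which matters for long study excerpts. A short loading sketch (repo id assumed from this commit; the example text is illustrative):

```python
# Load the tokenizer described above; truncation guards against the 512-token model limit.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("javicorvi/pretoxtm-ner")
enc = tokenizer("Hepatocellular hypertrophy in high-dose males", truncation=True, max_length=512)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```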
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0107f0f369143ef3e6969cd01fdcbfc6dc3da1db43f8dac2c4b0c57b869e6932
size 4027
vocab.txt ADDED
The diff for this file is too large to render. See raw diff