DayCardoso committed · verified
Commit b8d1607 · 1 parent: 83b6f9f

Model save
README.md ADDED
@@ -0,0 +1,101 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: google-bert/bert-base-uncased
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: bert-seq-class-values-no-context
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-seq-class-values-no-context
+
+ This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3960
+ - Subset Accuracy: 0.3047
+ - F1 Macro: 0.3430
+ - F1 Micro: 0.4073
+ - Precision Macro: 0.3609
+ - Recall Macro: 0.3304
+ - ROC AUC: 0.7914
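+
+ The subset-accuracy and macro/micro F1 metrics suggest a multi-label classification head, so decoding applies a sigmoid over the logits with a per-label threshold. A minimal inference sketch under that assumption; the repo id is inferred from the committer and model name, and the 0.5 threshold is an assumption:
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ # Assumed repo id, inferred from the committer and model name.
+ model_id = "DayCardoso/bert-seq-class-values-no-context"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model.eval()
+
+ inputs = tokenizer("Example sentence to classify.", return_tensors="pt",
+                    truncation=True, max_length=512)
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Multi-label decoding: sigmoid + threshold (0.5 is an assumption).
+ probs = torch.sigmoid(logits)[0]
+ predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
+ print(predicted)
+ ```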
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 2025
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 20
+ - mixed_precision_training: Native AMP
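+
+ A sketch of how these settings map onto `TrainingArguments`; the `output_dir` and the exact mixed-precision flag are assumptions, since the card does not record them:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Mirrors the hyperparameters listed above; everything else is an assumption.
+ training_args = TrainingArguments(
+     output_dir="bert-seq-class-values-no-context",  # assumed
+     learning_rate=5e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=4,
+     seed=2025,
+     gradient_accumulation_steps=4,   # effective train batch size: 4 x 4 = 16
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     num_train_epochs=20,
+     fp16=True,  # "Native AMP" per the card; fp16 vs. bf16 is an assumption
+ )
+ ```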
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Subset Accuracy | F1 Macro | F1 Micro | Precision Macro | Recall Macro | ROC AUC |
+ |:-------------:|:-------:|:-----:|:---------------:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:-------:|
+ | 0.4117 | 0.5002 | 767 | 0.2112 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6381 |
+ | 0.1905 | 1.0 | 1534 | 0.1792 | 0.0696 | 0.0475 | 0.1275 | 0.1803 | 0.0303 | 0.7750 |
+ | 0.1716 | 1.5002 | 2301 | 0.1687 | 0.1532 | 0.1252 | 0.2519 | 0.3027 | 0.0962 | 0.8048 |
+ | 0.1619 | 2.0 | 3068 | 0.1644 | 0.1989 | 0.1771 | 0.3019 | 0.3561 | 0.1341 | 0.8267 |
+ | 0.1395 | 2.5002 | 3835 | 0.1669 | 0.2553 | 0.2407 | 0.3649 | 0.4672 | 0.1917 | 0.8283 |
+ | 0.1334 | 3.0 | 4602 | 0.1634 | 0.2505 | 0.2582 | 0.3545 | 0.4557 | 0.2006 | 0.8352 |
+ | 0.1032 | 3.5002 | 5369 | 0.1803 | 0.3041 | 0.3093 | 0.3938 | 0.4053 | 0.2615 | 0.8262 |
+ | 0.0958 | 4.0 | 6136 | 0.1826 | 0.3129 | 0.3013 | 0.3989 | 0.4117 | 0.2628 | 0.8277 |
+ | 0.0733 | 4.5002 | 6903 | 0.2074 | 0.3161 | 0.3140 | 0.4001 | 0.3970 | 0.2831 | 0.8149 |
+ | 0.0655 | 5.0 | 7670 | 0.2097 | 0.3098 | 0.3195 | 0.3967 | 0.3947 | 0.2824 | 0.8152 |
+ | 0.0536 | 5.5002 | 8437 | 0.2252 | 0.3046 | 0.3281 | 0.3961 | 0.3764 | 0.3032 | 0.8099 |
+ | 0.0462 | 6.0 | 9204 | 0.2318 | 0.3045 | 0.3260 | 0.3931 | 0.3620 | 0.3040 | 0.8071 |
+ | 0.0398 | 6.5002 | 9971 | 0.2453 | 0.3076 | 0.3266 | 0.3892 | 0.3845 | 0.2963 | 0.8074 |
+ | 0.0345 | 7.0 | 10738 | 0.2548 | 0.2984 | 0.3262 | 0.3891 | 0.3547 | 0.3115 | 0.8027 |
+ | 0.0249 | 7.5002 | 11505 | 0.2640 | 0.2958 | 0.3350 | 0.3964 | 0.3611 | 0.3235 | 0.7998 |
+ | 0.0251 | 8.0 | 12272 | 0.2687 | 0.3071 | 0.3281 | 0.4027 | 0.3745 | 0.3050 | 0.7992 |
+ | 0.0183 | 8.5002 | 13039 | 0.2850 | 0.2839 | 0.3276 | 0.3810 | 0.3673 | 0.3192 | 0.8005 |
+ | 0.019 | 9.0 | 13806 | 0.2879 | 0.3023 | 0.3316 | 0.3910 | 0.3708 | 0.3093 | 0.7972 |
+ | 0.0141 | 9.5002 | 14573 | 0.3012 | 0.2963 | 0.3357 | 0.4016 | 0.3442 | 0.3339 | 0.7961 |
+ | 0.0136 | 10.0 | 15340 | 0.3050 | 0.3053 | 0.3329 | 0.4001 | 0.3667 | 0.3111 | 0.7969 |
+ | 0.0098 | 10.5002 | 16107 | 0.3157 | 0.2940 | 0.3387 | 0.4071 | 0.3415 | 0.3431 | 0.7964 |
+ | 0.0099 | 11.0 | 16874 | 0.3252 | 0.2855 | 0.3409 | 0.4005 | 0.3394 | 0.3500 | 0.7948 |
+ | 0.0072 | 11.5002 | 17641 | 0.3294 | 0.2874 | 0.3371 | 0.3998 | 0.3582 | 0.3333 | 0.7977 |
+ | 0.0071 | 12.0 | 18408 | 0.3379 | 0.2931 | 0.3351 | 0.3931 | 0.3699 | 0.3146 | 0.7926 |
+ | 0.0051 | 12.5002 | 19175 | 0.3494 | 0.2919 | 0.3308 | 0.3975 | 0.3600 | 0.3168 | 0.7926 |
+ | 0.0047 | 13.0 | 19942 | 0.3546 | 0.2888 | 0.3387 | 0.3925 | 0.3604 | 0.3240 | 0.7911 |
+ | 0.0039 | 13.5002 | 20709 | 0.3598 | 0.2977 | 0.3415 | 0.4025 | 0.3681 | 0.3291 | 0.7955 |
+ | 0.0036 | 14.0 | 21476 | 0.3600 | 0.2993 | 0.3419 | 0.4061 | 0.3644 | 0.3282 | 0.7902 |
+ | 0.0025 | 14.5002 | 22243 | 0.3717 | 0.3023 | 0.3465 | 0.4098 | 0.3655 | 0.3327 | 0.7904 |
+ | 0.003 | 15.0 | 23010 | 0.3783 | 0.3030 | 0.3373 | 0.3982 | 0.3687 | 0.3141 | 0.7914 |
+ | 0.002 | 15.5002 | 23777 | 0.3835 | 0.3011 | 0.3317 | 0.3985 | 0.3687 | 0.3089 | 0.7906 |
+ | 0.0016 | 16.0 | 24544 | 0.3909 | 0.3099 | 0.3430 | 0.4099 | 0.3712 | 0.3232 | 0.7894 |
+ | 0.0016 | 16.5002 | 25311 | 0.3900 | 0.2987 | 0.3449 | 0.4073 | 0.3616 | 0.3352 | 0.7935 |
+ | 0.0013 | 17.0 | 26078 | 0.3960 | 0.3047 | 0.3430 | 0.4073 | 0.3609 | 0.3304 | 0.7914 |
+
+ Training was configured for 20 epochs, but the log ends at epoch 17.0, whose checkpoint matches the headline metrics above (consistent with early stopping or a manually selected checkpoint).
+
+ ### Framework versions
+
+ - Transformers 4.53.2
+ - Pytorch 2.6.0+cu124
+ - Datasets 2.14.4
+ - Tokenizers 0.21.2
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1661347469b84c96df167eb01dda77538e3637a0f9b09cf5772a18fc3d01a10e
+ oid sha256:9c58be097aabc8cbfe85b997d7ba59c7976424e5270b9af30bbc439dc1b9cf08
  size 438010940
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
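
A quick sanity check that a freshly loaded tokenizer matches this config; the repo id is an assumption inferred from the committer and model name:

```python
from transformers import AutoTokenizer

# Assumed repo id; the checks mirror the tokenizer_config.json above.
tok = AutoTokenizer.from_pretrained("DayCardoso/bert-seq-class-values-no-context")
assert tok.model_max_length == 512
assert (tok.cls_token, tok.sep_token, tok.pad_token) == ("[CLS]", "[SEP]", "[PAD]")
# do_lower_case is true, so casing is normalized away:
print(tok.tokenize("Hello WORLD"))  # ['hello', 'world']
```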
vocab.txt ADDED
The diff for this file is too large to render. See raw diff