Kamer committed
Commit 3ef1bb2 · 1 Parent(s): 65c2e49

End of training

Files changed (2)
  1. README.md +86 -0
  2. pytorch_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ license: apache-2.0
+ base_model: distilbert-base-uncased
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: NoDuplicates
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # NoDuplicates
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3935
+ - Accuracy: 0.9173
+ - F1 Macro: 0.8603
+ - F1 Class 0: 0.9418
+ - F1 Class 1: 0.5714
+ - F1 Class 2: 0.9176
+ - F1 Class 3: 0.7333
+ - F1 Class 4: 0.8232
+ - F1 Class 5: 0.7805
+ - F1 Class 6: 0.8866
+ - F1 Class 7: 0.7606
+ - F1 Class 8: 0.7500
+ - F1 Class 9: 0.9892
+ - F1 Class 10: 0.9074
+ - F1 Class 11: 0.9600
+ - F1 Class 12: 0.9240
+ - F1 Class 13: 0.9388
+ - F1 Class 14: 0.8837
+ - F1 Class 15: 0.8352
+ - F1 Class 16: 0.8511
+ - F1 Class 17: 0.9822
+ - F1 Class 18: 0.9583
+ - F1 Class 19: 0.8108
+
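The "F1 Macro" above is the unweighted mean of the twenty per-class F1 scores. A minimal sketch checking that arithmetic against the values reported in this card:

```python
# Macro F1 is the unweighted mean of the per-class F1 scores.
# The values below are the final evaluation scores listed above.
per_class_f1 = [
    0.9418, 0.5714, 0.9176, 0.7333, 0.8232, 0.7805, 0.8866, 0.7606,
    0.7500, 0.9892, 0.9074, 0.9600, 0.9240, 0.9388, 0.8837, 0.8352,
    0.8511, 0.9822, 0.9583, 0.8108,
]
macro_f1 = sum(per_class_f1) / len(per_class_f1)
print(round(macro_f1, 4))  # 0.8603, matching the reported F1 Macro
```

Because macro averaging weights every class equally, the weak class 1 score (0.5714) pulls the macro F1 well below the 0.9173 accuracy.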
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3.0
+
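With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from 5e-05 at step 0 down to 0 at the final optimizer step. A minimal sketch of that schedule (the total step count below is illustrative, not taken from this run):

```python
# Sketch of the linear LR schedule (lr_scheduler_type: linear, no warmup):
# the rate falls linearly from base_lr to 0 over the whole run.
def linear_lr(step, total_steps, base_lr=5e-05):
    """Learning rate after `step` optimizer steps out of `total_steps`."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 3000  # illustrative total step count
print(linear_lr(0, total))     # 5e-05 at the start
print(linear_lr(1500, total))  # half the base rate at the midpoint
print(linear_lr(3000, total))  # 0.0 at the end
```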
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 | F1 Class 6 | F1 Class 7 | F1 Class 8 | F1 Class 9 | F1 Class 10 | F1 Class 11 | F1 Class 12 | F1 Class 13 | F1 Class 14 | F1 Class 15 | F1 Class 16 | F1 Class 17 | F1 Class 18 | F1 Class 19 |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|
+ | 1.193 | 0.44 | 500 | 0.5958 | 0.8531 | 0.6517 | 0.8756 | 0.0 | 0.8770 | 0.0 | 0.7245 | 0.6667 | 0.8094 | 0.0 | 0.0 | 0.9862 | 0.7629 | 0.9600 | 0.8840 | 0.8936 | 0.5970 | 0.7089 | 0.7045 | 0.9455 | 0.9556 | 0.6829 |
+ | 0.5371 | 0.88 | 1000 | 0.4463 | 0.8863 | 0.7428 | 0.9138 | 0.0 | 0.8810 | 0.3478 | 0.7988 | 0.8293 | 0.8641 | 0.6154 | 0.0 | 0.9862 | 0.85 | 0.9600 | 0.9010 | 0.9167 | 0.8052 | 0.7253 | 0.7727 | 0.9643 | 0.92 | 0.8039 |
+ | 0.3813 | 1.33 | 1500 | 0.4335 | 0.8942 | 0.7760 | 0.9209 | 0.3333 | 0.8978 | 0.56 | 0.7771 | 0.7805 | 0.8389 | 0.5517 | 0.0 | 0.9877 | 0.9159 | 0.9600 | 0.9184 | 0.9388 | 0.8163 | 0.7551 | 0.8155 | 0.9765 | 0.9574 | 0.8190 |
+ | 0.3434 | 1.77 | 2000 | 0.4025 | 0.9040 | 0.8490 | 0.9354 | 0.5714 | 0.8939 | 0.8125 | 0.7829 | 0.8 | 0.8638 | 0.7463 | 0.7500 | 0.9877 | 0.8276 | 0.9600 | 0.9165 | 0.9388 | 0.7568 | 0.8276 | 0.8317 | 0.9759 | 0.9583 | 0.8431 |
+ | 0.2959 | 2.21 | 2500 | 0.4101 | 0.9080 | 0.8560 | 0.9329 | 0.5714 | 0.9077 | 0.8387 | 0.7905 | 0.8 | 0.8581 | 0.6984 | 0.7500 | 0.9892 | 0.9020 | 0.9600 | 0.9163 | 0.9388 | 0.8667 | 0.8511 | 0.8261 | 0.9759 | 0.9684 | 0.7778 |
+ | 0.1972 | 2.65 | 3000 | 0.3935 | 0.9173 | 0.8603 | 0.9418 | 0.5714 | 0.9176 | 0.7333 | 0.8232 | 0.7805 | 0.8866 | 0.7606 | 0.7500 | 0.9892 | 0.9074 | 0.9600 | 0.9240 | 0.9388 | 0.8837 | 0.8352 | 0.8511 | 0.9822 | 0.9583 | 0.8108 |
+
+
+ ### Framework versions
+
+ - Transformers 4.32.0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.14.4
+ - Tokenizers 0.13.3
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c9d13593fef16a323e590ab80c259ded77007bece6dc29ee9be0dc925080d5dc
+ oid sha256:29fb0509f3249908777147ae0cd97a54ee511735bffd428ab0fc4e37becc7916
  size 267910893