tom-010 committed on
Commit f28df5e · verified · 1 Parent(s): 7f52dbe

Model save

Files changed (2)
  1. README.md +103 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,103 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: microsoft/deberta-v3-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: judge_answer___29_deberta_v3_base_msmarco_answerability
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # judge_answer___29_deberta_v3_base_msmarco_answerability
+
+ This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4194
+ - Accuracy: 0.8164
+ - Precision: 0.7814
+ - Recall: 0.8815
+ - F1: 0.8284
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
+ |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 0.5008 | 0.0272 | 2000 | 0.4931 | 0.7864 | 0.7498 | 0.8632 | 0.8025 |
+ | 0.4832 | 0.0544 | 4000 | 0.4565 | 0.7858 | 0.7422 | 0.8795 | 0.8050 |
+ | 0.4716 | 0.0816 | 6000 | 0.4758 | 0.7926 | 0.7527 | 0.8751 | 0.8093 |
+ | 0.4645 | 0.1088 | 8000 | 0.4740 | 0.7878 | 0.7633 | 0.8377 | 0.7988 |
+ | 0.4697 | 0.1360 | 10000 | 0.4519 | 0.7982 | 0.7720 | 0.8496 | 0.8089 |
+ | 0.4729 | 0.1632 | 12000 | 0.4471 | 0.7946 | 0.7664 | 0.8508 | 0.8064 |
+ | 0.4589 | 0.1904 | 14000 | 0.4455 | 0.8002 | 0.7661 | 0.8675 | 0.8137 |
+ | 0.4513 | 0.2176 | 16000 | 0.4726 | 0.7934 | 0.7472 | 0.8902 | 0.8125 |
+ | 0.4573 | 0.2448 | 18000 | 0.4357 | 0.8016 | 0.7775 | 0.8481 | 0.8113 |
+ | 0.4474 | 0.2720 | 20000 | 0.4738 | 0.7932 | 0.7503 | 0.8823 | 0.8110 |
+ | 0.4480 | 0.2992 | 22000 | 0.4360 | 0.7934 | 0.7940 | 0.7955 | 0.7948 |
+ | 0.4490 | 0.3264 | 24000 | 0.4464 | 0.7996 | 0.7708 | 0.8560 | 0.8112 |
+ | 0.4490 | 0.3536 | 26000 | 0.4467 | 0.8048 | 0.7655 | 0.8819 | 0.8196 |
+ | 0.4483 | 0.3808 | 28000 | 0.4459 | 0.8042 | 0.7603 | 0.8918 | 0.8208 |
+ | 0.4468 | 0.4080 | 30000 | 0.4400 | 0.8054 | 0.7898 | 0.8353 | 0.8119 |
+ | 0.4413 | 0.4352 | 32000 | 0.4321 | 0.8048 | 0.7917 | 0.8302 | 0.8105 |
+ | 0.4444 | 0.4624 | 34000 | 0.4309 | 0.8086 | 0.7691 | 0.8850 | 0.8230 |
+ | 0.4507 | 0.4896 | 36000 | 0.4301 | 0.8124 | 0.7945 | 0.8457 | 0.8193 |
+ | 0.4426 | 0.5168 | 38000 | 0.4243 | 0.8052 | 0.7698 | 0.8739 | 0.8186 |
+ | 0.4321 | 0.5440 | 40000 | 0.4243 | 0.8074 | 0.7681 | 0.8839 | 0.8219 |
+ | 0.4301 | 0.5712 | 42000 | 0.4380 | 0.8060 | 0.7640 | 0.8886 | 0.8216 |
+ | 0.4418 | 0.5984 | 44000 | 0.4280 | 0.8096 | 0.7857 | 0.8544 | 0.8186 |
+ | 0.4334 | 0.6256 | 46000 | 0.4326 | 0.8090 | 0.7765 | 0.8707 | 0.8209 |
+ | 0.4385 | 0.6528 | 48000 | 0.4273 | 0.8116 | 0.7844 | 0.8624 | 0.8215 |
+ | 0.4337 | 0.6800 | 50000 | 0.4306 | 0.8086 | 0.7795 | 0.8636 | 0.8194 |
+ | 0.4294 | 0.7072 | 52000 | 0.4397 | 0.8110 | 0.7706 | 0.8886 | 0.8254 |
+ | 0.4276 | 0.7344 | 54000 | 0.4344 | 0.8138 | 0.7770 | 0.8831 | 0.8267 |
+ | 0.4183 | 0.7616 | 56000 | 0.4291 | 0.8120 | 0.7650 | 0.9037 | 0.8286 |
+ | 0.4226 | 0.7888 | 58000 | 0.4342 | 0.8134 | 0.7767 | 0.8827 | 0.8263 |
+ | 0.4266 | 0.8160 | 60000 | 0.4234 | 0.8132 | 0.7840 | 0.8675 | 0.8236 |
+ | 0.4285 | 0.8432 | 62000 | 0.4167 | 0.8156 | 0.7882 | 0.8660 | 0.8252 |
+ | 0.4265 | 0.8704 | 64000 | 0.4206 | 0.8142 | 0.7734 | 0.8918 | 0.8284 |
+ | 0.4290 | 0.8976 | 66000 | 0.4165 | 0.8174 | 0.7910 | 0.8656 | 0.8266 |
+ | 0.4308 | 0.9248 | 68000 | 0.4192 | 0.8140 | 0.7775 | 0.8827 | 0.8268 |
+ | 0.4248 | 0.9520 | 70000 | 0.4205 | 0.8152 | 0.7807 | 0.8795 | 0.8272 |
+ | 0.4250 | 0.9792 | 72000 | 0.4194 | 0.8164 | 0.7814 | 0.8815 | 0.8284 |
+
+
+ ### Framework versions
+
+ - Transformers 4.45.2
+ - Pytorch 2.4.1+cu124
+ - Datasets 3.0.1
+ - Tokenizers 0.20.1
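
The accuracy/precision/recall/F1 columns in the training-results table follow the standard binary-classification definitions (positive class presumably "answerable"). As a sanity-check sketch, here is how those four numbers relate to a confusion matrix; the example labels below are made up for illustration, not the model's actual evaluation data:

```python
# Standard binary-classification metrics, matching the columns in the
# training-results table above. Pure Python, no dependencies.

def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    # Illustrative labels only, not the model's eval set.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    print(binary_metrics(y_true, y_pred))
```

Note that recall (0.8815) running well above precision (0.7814) in the final row means the checkpoint errs toward predicting the positive class.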
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a0ae9d93c3356616b40a0b046e197dbbe9adf31af8ca3ededbe5b13456434524
+ oid sha256:be3ae366ec9da21381d1d70bf1fd8a09fdfac0e302dd543add17778673612bdb
  size 737719272
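
The `oid sha256:` line in the git-LFS pointer above is the SHA-256 digest of the actual weights file, so a downloaded `model.safetensors` can be verified against it. A minimal sketch (the file path is illustrative):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so multi-GB weights never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Compare the printed digest with the oid in the LFS pointer.
    print(sha256_of_file("model.safetensors"))
```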