DanSarm committed
Commit 88928e4 · verified · 1 Parent(s): e1994ad

Fine-tuned Construction Receipt Model

Files changed (4)
  1. README.md +66 -68
  2. model.safetensors +1 -1
  3. tokenizer.json +3 -1
  4. training_args.bin +2 -2
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [DanSarm/receipt-core-model](https://huggingface.co/DanSarm/receipt-core-model) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.2738
+ - Loss: 0.2616
 
  ## Model description
 
@@ -41,81 +41,79 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - num_epochs: 1000
- - mixed_precision_training: Native AMP
+ - num_epochs: 500
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 1.9458 | 1.0 | 25 | 0.6631 |
- | 0.6783 | 2.0 | 50 | 0.4201 |
- | 0.448 | 3.0 | 75 | 0.3256 |
- | 0.3453 | 4.0 | 100 | 0.2814 |
- | 0.2939 | 5.0 | 125 | 0.2581 |
- | 0.2389 | 6.0 | 150 | 0.2528 |
- | 0.2087 | 7.0 | 175 | 0.2462 |
- | 0.1769 | 8.0 | 200 | 0.2311 |
- | 0.1746 | 9.0 | 225 | 0.2286 |
- | 0.1488 | 10.0 | 250 | 0.2306 |
- | 0.1322 | 11.0 | 275 | 0.2275 |
- | 0.1219 | 12.0 | 300 | 0.2243 |
- | 0.1161 | 13.0 | 325 | 0.2069 |
- | 0.0984 | 14.0 | 350 | 0.2317 |
- | 0.0936 | 15.0 | 375 | 0.2312 |
- | 0.0891 | 16.0 | 400 | 0.2274 |
- | 0.0792 | 17.0 | 425 | 0.2311 |
- | 0.07 | 18.0 | 450 | 0.2399 |
- | 0.0666 | 19.0 | 475 | 0.2336 |
- | 0.0704 | 20.0 | 500 | 0.2349 |
- | 0.0644 | 21.0 | 525 | 0.2397 |
- | 0.0552 | 22.0 | 550 | 0.2434 |
- | 0.0517 | 23.0 | 575 | 0.2428 |
- | 0.0475 | 24.0 | 600 | 0.2462 |
- | 0.0453 | 25.0 | 625 | 0.2203 |
- | 0.0422 | 26.0 | 650 | 0.2264 |
- | 0.0395 | 27.0 | 675 | 0.2366 |
- | 0.0394 | 28.0 | 700 | 0.2393 |
- | 0.0361 | 29.0 | 725 | 0.2423 |
- | 0.0302 | 30.0 | 750 | 0.2480 |
- | 0.0317 | 31.0 | 775 | 0.2441 |
- | 0.0265 | 32.0 | 800 | 0.2519 |
- | 0.027 | 33.0 | 825 | 0.2541 |
- | 0.027 | 34.0 | 850 | 0.2512 |
- | 0.0266 | 35.0 | 875 | 0.2590 |
- | 0.0246 | 36.0 | 900 | 0.2319 |
- | 0.023 | 37.0 | 925 | 0.2419 |
- | 0.0195 | 38.0 | 950 | 0.2473 |
- | 0.0206 | 39.0 | 975 | 0.2471 |
- | 0.019 | 40.0 | 1000 | 0.2485 |
- | 0.0175 | 41.0 | 1025 | 0.2635 |
- | 0.0163 | 42.0 | 1050 | 0.2513 |
- | 0.0185 | 43.0 | 1075 | 0.2618 |
- | 0.0167 | 44.0 | 1100 | 0.2549 |
- | 0.0161 | 45.0 | 1125 | 0.2540 |
- | 0.0163 | 46.0 | 1150 | 0.2543 |
- | 0.0149 | 47.0 | 1175 | 0.2482 |
- | 0.016 | 48.0 | 1200 | 0.2487 |
- | 0.0134 | 49.0 | 1225 | 0.2572 |
- | 0.0136 | 50.0 | 1250 | 0.2589 |
- | 0.0141 | 51.0 | 1275 | 0.2512 |
- | 0.0108 | 52.0 | 1300 | 0.2565 |
- | 0.011 | 53.0 | 1325 | 0.2512 |
- | 0.0094 | 54.0 | 1350 | 0.2588 |
- | 0.0132 | 55.0 | 1375 | 0.2515 |
- | 0.0125 | 56.0 | 1400 | 0.2597 |
- | 0.0118 | 57.0 | 1425 | 0.2601 |
- | 0.0097 | 58.0 | 1450 | 0.2579 |
- | 0.0098 | 59.0 | 1475 | 0.2586 |
- | 0.0083 | 60.0 | 1500 | 0.2821 |
- | 0.0081 | 61.0 | 1525 | 0.2811 |
- | 0.0081 | 62.0 | 1550 | 0.2633 |
- | 0.0078 | 63.0 | 1575 | 0.2738 |
+ | 1.3079 | 1.0 | 44 | 0.4549 |
+ | 0.4772 | 2.0 | 88 | 0.3239 |
+ | 0.3391 | 3.0 | 132 | 0.2757 |
+ | 0.2673 | 4.0 | 176 | 0.2483 |
+ | 0.2231 | 5.0 | 220 | 0.2324 |
+ | 0.1909 | 6.0 | 264 | 0.2200 |
+ | 0.1688 | 7.0 | 308 | 0.2094 |
+ | 0.1511 | 8.0 | 352 | 0.2051 |
+ | 0.1343 | 9.0 | 396 | 0.2102 |
+ | 0.1248 | 10.0 | 440 | 0.1969 |
+ | 0.1129 | 11.0 | 484 | 0.2020 |
+ | 0.1042 | 12.0 | 528 | 0.1937 |
+ | 0.0953 | 13.0 | 572 | 0.2084 |
+ | 0.0871 | 14.0 | 616 | 0.2120 |
+ | 0.0879 | 15.0 | 660 | 0.2149 |
+ | 0.0789 | 16.0 | 704 | 0.2104 |
+ | 0.0771 | 17.0 | 748 | 0.2206 |
+ | 0.067 | 18.0 | 792 | 0.2162 |
+ | 0.0644 | 19.0 | 836 | 0.2176 |
+ | 0.0572 | 20.0 | 880 | 0.2225 |
+ | 0.0538 | 21.0 | 924 | 0.2258 |
+ | 0.0552 | 22.0 | 968 | 0.2223 |
+ | 0.0516 | 23.0 | 1012 | 0.2228 |
+ | 0.0444 | 24.0 | 1056 | 0.2273 |
+ | 0.0398 | 25.0 | 1100 | 0.2279 |
+ | 0.0388 | 26.0 | 1144 | 0.2264 |
+ | 0.0377 | 27.0 | 1188 | 0.2261 |
+ | 0.0344 | 28.0 | 1232 | 0.2305 |
+ | 0.0323 | 29.0 | 1276 | 0.2415 |
+ | 0.0296 | 30.0 | 1320 | 0.2364 |
+ | 0.0297 | 31.0 | 1364 | 0.2434 |
+ | 0.0268 | 32.0 | 1408 | 0.2391 |
+ | 0.0232 | 33.0 | 1452 | 0.2384 |
+ | 0.0226 | 34.0 | 1496 | 0.2370 |
+ | 0.022 | 35.0 | 1540 | 0.2401 |
+ | 0.0218 | 36.0 | 1584 | 0.2355 |
+ | 0.0222 | 37.0 | 1628 | 0.2384 |
+ | 0.0185 | 38.0 | 1672 | 0.2289 |
+ | 0.0169 | 39.0 | 1716 | 0.2419 |
+ | 0.0172 | 40.0 | 1760 | 0.2434 |
+ | 0.0149 | 41.0 | 1804 | 0.2515 |
+ | 0.0143 | 42.0 | 1848 | 0.2405 |
+ | 0.0133 | 43.0 | 1892 | 0.2493 |
+ | 0.0151 | 44.0 | 1936 | 0.2440 |
+ | 0.0117 | 45.0 | 1980 | 0.2458 |
+ | 0.011 | 46.0 | 2024 | 0.2501 |
+ | 0.01 | 47.0 | 2068 | 0.2546 |
+ | 0.0102 | 48.0 | 2112 | 0.2501 |
+ | 0.0099 | 49.0 | 2156 | 0.2542 |
+ | 0.01 | 50.0 | 2200 | 0.2647 |
+ | 0.0098 | 51.0 | 2244 | 0.2525 |
+ | 0.0105 | 52.0 | 2288 | 0.2569 |
+ | 0.0076 | 53.0 | 2332 | 0.2586 |
+ | 0.0087 | 54.0 | 2376 | 0.2648 |
+ | 0.0109 | 55.0 | 2420 | 0.2599 |
+ | 0.0087 | 56.0 | 2464 | 0.2537 |
+ | 0.0103 | 57.0 | 2508 | 0.2536 |
+ | 0.0075 | 58.0 | 2552 | 0.2607 |
+ | 0.0078 | 59.0 | 2596 | 0.2620 |
+ | 0.0055 | 60.0 | 2640 | 0.2629 |
+ | 0.0071 | 61.0 | 2684 | 0.2608 |
+ | 0.007 | 62.0 | 2728 | 0.2616 |
 
 
  ### Framework versions
 
  - Transformers 4.49.0
  - Pytorch 2.6.0+cu124
- - Datasets 3.3.1
- - Tokenizers 0.21.0
+ - Datasets 3.4.1
+ - Tokenizers 0.21.1
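
The updated card reports an evaluation loss of 0.2616 but does not state the task head or this repository's id. A minimal loading sketch under those caveats, assuming a token-classification head (common for receipt field extraction) and using the base checkpoint's repo id as a placeholder:

```python
# Hedged sketch: repo_id is a placeholder (the card only links the base
# model), and AutoModelForTokenClassification is an assumption about the
# task head; substitute this repo's id and the correct Auto* class.
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo_id = "DanSarm/receipt-core-model"  # placeholder; point at this fine-tuned repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

inputs = tokenizer("2X4 LUMBER 8FT $5.98", return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch, seq_len, num_labels)
```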
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:22194e837198408c893d67c727c0776bb8cac42c3eb2fe6486900c4e6b45987f
+ oid sha256:8d82b2a83cc59f2599b15fff94a0e29335ce941ea39c2c771940fb0ec28a0f25
  size 891644712
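
The weight file is stored as a Git LFS pointer, so the commit only swaps the sha256 oid; the payload size (891644712 bytes) is unchanged. A small sketch for checking a downloaded copy against the new pointer:

```python
# Verify a local model.safetensors against the LFS pointer in this commit.
import hashlib
import os

EXPECTED_OID = "8d82b2a83cc59f2599b15fff94a0e29335ce941ea39c2c771940fb0ec28a0f25"
EXPECTED_SIZE = 891644712

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

ok = (h.hexdigest() == EXPECTED_OID
      and os.path.getsize("model.safetensors") == EXPECTED_SIZE)
print("match" if ok else "mismatch")
```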
tokenizer.json CHANGED
@@ -7,7 +7,9 @@
    "stride": 0
  },
  "padding": {
-   "strategy": "BatchLongest",
+   "strategy": {
+     "Fixed": 128
+   },
    "direction": "Right",
    "pad_to_multiple_of": null,
    "pad_id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b78e866639e360525130aceb768ecc0b4d5663acea3fae8b817631cb40018b73
- size 5432
+ oid sha256:7f8ef3dc419004156b9b0e465ddaf08fc9a7f9aedf41e1a02ce69bf5b1b13603
+ size 5496
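
training_args.bin is the serialized TrainingArguments object behind the hyperparameters the card lists. A sketch reconstructing just the documented values, with everything the card omits left as placeholders, plus how to inspect the real file:

```python
# Hedged sketch: only seed, optimizer settings, scheduler, and epoch count
# come from the card; output_dir (and any unlisted fields such as batch
# size and learning rate) are placeholders.
import torch
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="construction-receipt-model",  # placeholder
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=500,
)

# Inspect the actual serialized arguments; PyTorch 2.6 defaults to
# weights_only=True, which must be disabled for this pickled object.
real_args = torch.load("training_args.bin", weights_only=False)
print(real_args.num_train_epochs)
```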