Fine-tuned Construction Receipt Model
Changed files:
- README.md (+66 −68)
- model.safetensors (+1 −1)
- tokenizer.json (+3 −1)
- training_args.bin (+2 −2)
README.md
CHANGED
This model is a fine-tuned version of [DanSarm/receipt-core-model](https://huggingface.co/DanSarm/receipt-core-model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2616

## Model description
The following hyperparameters were used during training:
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 500
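The linear scheduler listed above decays the learning rate from its initial value down to zero over the total number of training steps. A minimal sketch of that schedule, assuming the Trainer default of zero warmup steps (the card does not list a warmup setting):

```python
def linear_lr(step: int, total_steps: int, base_lr: float) -> float:
    """Linearly decay from base_lr at step 0 to 0 at total_steps.

    Mirrors lr_scheduler_type: linear with zero warmup (an assumption;
    warmup_steps is not listed in the card). For a full run,
    total_steps would be steps_per_epoch * num_epochs.
    """
    return base_lr * max(0.0, (total_steps - step) / total_steps)
```

With 44 optimizer steps per epoch (visible in the results table) and num_epochs set to 500, a full run would schedule over 22,000 steps, although this training log stops near step 2728.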
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3079 | 1.0 | 44 | 0.4549 |
| 0.4772 | 2.0 | 88 | 0.3239 |
| 0.3391 | 3.0 | 132 | 0.2757 |
| 0.2673 | 4.0 | 176 | 0.2483 |
| 0.2231 | 5.0 | 220 | 0.2324 |
| 0.1909 | 6.0 | 264 | 0.2200 |
| 0.1688 | 7.0 | 308 | 0.2094 |
| 0.1511 | 8.0 | 352 | 0.2051 |
| 0.1343 | 9.0 | 396 | 0.2102 |
| 0.1248 | 10.0 | 440 | 0.1969 |
| 0.1129 | 11.0 | 484 | 0.2020 |
| 0.1042 | 12.0 | 528 | 0.1937 |
| 0.0953 | 13.0 | 572 | 0.2084 |
| 0.0871 | 14.0 | 616 | 0.2120 |
| 0.0879 | 15.0 | 660 | 0.2149 |
| 0.0789 | 16.0 | 704 | 0.2104 |
| 0.0771 | 17.0 | 748 | 0.2206 |
| 0.067 | 18.0 | 792 | 0.2162 |
| 0.0644 | 19.0 | 836 | 0.2176 |
| 0.0572 | 20.0 | 880 | 0.2225 |
| 0.0538 | 21.0 | 924 | 0.2258 |
| 0.0552 | 22.0 | 968 | 0.2223 |
| 0.0516 | 23.0 | 1012 | 0.2228 |
| 0.0444 | 24.0 | 1056 | 0.2273 |
| 0.0398 | 25.0 | 1100 | 0.2279 |
| 0.0388 | 26.0 | 1144 | 0.2264 |
| 0.0377 | 27.0 | 1188 | 0.2261 |
| 0.0344 | 28.0 | 1232 | 0.2305 |
| 0.0323 | 29.0 | 1276 | 0.2415 |
| 0.0296 | 30.0 | 1320 | 0.2364 |
| 0.0297 | 31.0 | 1364 | 0.2434 |
| 0.0268 | 32.0 | 1408 | 0.2391 |
| 0.0232 | 33.0 | 1452 | 0.2384 |
| 0.0226 | 34.0 | 1496 | 0.2370 |
| 0.022 | 35.0 | 1540 | 0.2401 |
| 0.0218 | 36.0 | 1584 | 0.2355 |
| 0.0222 | 37.0 | 1628 | 0.2384 |
| 0.0185 | 38.0 | 1672 | 0.2289 |
| 0.0169 | 39.0 | 1716 | 0.2419 |
| 0.0172 | 40.0 | 1760 | 0.2434 |
| 0.0149 | 41.0 | 1804 | 0.2515 |
| 0.0143 | 42.0 | 1848 | 0.2405 |
| 0.0133 | 43.0 | 1892 | 0.2493 |
| 0.0151 | 44.0 | 1936 | 0.2440 |
| 0.0117 | 45.0 | 1980 | 0.2458 |
| 0.011 | 46.0 | 2024 | 0.2501 |
| 0.01 | 47.0 | 2068 | 0.2546 |
| 0.0102 | 48.0 | 2112 | 0.2501 |
| 0.0099 | 49.0 | 2156 | 0.2542 |
| 0.01 | 50.0 | 2200 | 0.2647 |
| 0.0098 | 51.0 | 2244 | 0.2525 |
| 0.0105 | 52.0 | 2288 | 0.2569 |
| 0.0076 | 53.0 | 2332 | 0.2586 |
| 0.0087 | 54.0 | 2376 | 0.2648 |
| 0.0109 | 55.0 | 2420 | 0.2599 |
| 0.0087 | 56.0 | 2464 | 0.2537 |
| 0.0103 | 57.0 | 2508 | 0.2536 |
| 0.0075 | 58.0 | 2552 | 0.2607 |
| 0.0078 | 59.0 | 2596 | 0.2620 |
| 0.0055 | 60.0 | 2640 | 0.2629 |
| 0.0071 | 61.0 | 2684 | 0.2608 |
| 0.007 | 62.0 | 2728 | 0.2616 |
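Validation loss in the table bottoms out at 0.1937 around epoch 12 and then drifts upward while training loss keeps falling, a typical overfitting pattern. If `load_best_model_at_end` were enabled (an assumption; the card does not say), the Trainer would restore that checkpoint. Picking the best epoch from rows like these is a one-liner; a small sketch over an illustrative subset of the table:

```python
# (validation_loss, epoch) pairs -- a small subset of the table above,
# shown for illustration only
rows = [
    (0.4549, 1),
    (0.1969, 10),
    (0.2020, 11),
    (0.1937, 12),
    (0.2084, 13),
]

# Tuples compare element-wise, so min() finds the lowest validation loss
# and carries its epoch along with it.
best_loss, best_epoch = min(rows)
```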
### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
model.safetensors
CHANGED

The Git LFS pointer now records the new weights; the file size is unchanged:

version https://git-lfs.github.com/spec/v1
oid sha256:8d82b2a83cc59f2599b15fff94a0e29335ce941ea39c2c771940fb0ec28a0f25
size 891644712
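A Git LFS pointer like the one above stores only the object's sha256 and byte size; the actual weights are fetched by hash. A downloaded file can be checked against a pointer with a hypothetical helper such as this (not part of the repo):

```python
import hashlib


def matches_pointer(path: str, oid_hex: str, size: int) -> bool:
    """Return True if the file at `path` has the sha256 digest and byte
    size recorded in a Git LFS pointer (hypothetical helper)."""
    digest = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large weight files are not read at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            total += len(chunk)
    return total == size and digest.hexdigest() == oid_hex
```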
tokenizer.json
CHANGED

The padding configuration now uses a fixed-length strategy:

    "stride": 0
  },
  "padding": {
    "strategy": {
      "Fixed": 128
    },
    "direction": "Right",
    "pad_to_multiple_of": null,
    "pad_id": 0,
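With this change, every encoded sequence is padded on the right with pad_id 0 up to a fixed length of 128 tokens. The effect can be sketched in plain Python (illustrative only, not the tokenizers library API):

```python
def pad_fixed_right(ids: list[int], length: int = 128, pad_id: int = 0) -> list[int]:
    """Right-pad token ids to a fixed length, mirroring the config
    "strategy": {"Fixed": 128}, "direction": "Right", "pad_id": 0.

    Sequences longer than `length` are clipped here for simplicity;
    in practice the tokenizer's truncation settings handle that case.
    """
    if len(ids) >= length:
        return ids[:length]
    return ids + [pad_id] * (length - len(ids))
```

With the tokenizers library itself this behavior corresponds to enabling padding with a fixed length (e.g. `tokenizer.enable_padding(length=128)`), which is an assumption about how this JSON was produced.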
training_args.bin
CHANGED

The Git LFS pointer now reads:

version https://git-lfs.github.com/spec/v1
oid sha256:7f8ef3dc419004156b9b0e465ddaf08fc9a7f9aedf41e1a02ce69bf5b1b13603
size 5496