---
library_name: transformers
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
model-index:
  - name: layoutlm-receipts
    results: []
---

# layoutlm-receipts

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0896
- Precision: 0.75
- Recall: 0.75
- F1: 0.75
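
The precision/recall/F1 metrics suggest a token-classification (field-tagging) head on top of LayoutLM. Below is a minimal inference sketch under that assumption; the repo id is a placeholder for this checkpoint, and the words and boxes would normally come from an OCR engine:

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

# Placeholder id: substitute the actual path or Hub id of this checkpoint.
model_id = "your-username/layoutlm-receipts"
tokenizer = LayoutLMTokenizerFast.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)
model.eval()

# Words and bounding boxes from an OCR engine, with coordinates
# normalized to the 0-1000 grid that LayoutLM expects.
words = ["Total", "$12.50"]
word_boxes = [[100, 500, 180, 520], [400, 500, 470, 520]]

# Tokenize pre-split words, then repeat each word's box across its sub-tokens.
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
token_boxes = [
    [0, 0, 0, 0] if idx is None else word_boxes[idx]  # dummy box for [CLS]/[SEP]
    for idx in encoding.word_ids(0)
]
bbox = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(
        input_ids=encoding["input_ids"],
        attention_mask=encoding["attention_mask"],
        bbox=bbox,
    ).logits

predicted = logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[i] for i in predicted])
```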

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
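
As a rough reconstruction, these values map onto `transformers.TrainingArguments` as sketched below. The original training script is not part of this card; `output_dir` and the epoch-level evaluation/logging strategies are assumptions inferred from the per-epoch results table:

```python
from transformers import TrainingArguments

# Sketch reconstructed from the hyperparameters listed above; not the
# original training script. output_dir and the "epoch" strategies are
# assumptions based on the per-epoch entries in the results table.
training_args = TrainingArguments(
    output_dir="layoutlm-receipts",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch_fused",  # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=25,
    eval_strategy="epoch",
    logging_strategy="epoch",
)
```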

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.1758        | 1.0   | 8    | 0.1713          | 0.3529    | 0.6    | 0.4444 |
| 0.1438        | 2.0   | 16   | 0.2149          | 0.1111    | 0.15   | 0.1277 |
| 0.0494        | 3.0   | 24   | 0.2381          | 0.36      | 0.45   | 0.4000 |
| 0.042         | 4.0   | 32   | 0.1144          | 0.5455    | 0.6    | 0.5714 |
| 0.0236        | 5.0   | 40   | 0.0788          | 0.7       | 0.7    | 0.7    |
| 0.0111        | 6.0   | 48   | 0.0804          | 0.8333    | 0.75   | 0.7895 |
| 0.0114        | 7.0   | 56   | 0.0964          | 0.6667    | 0.7    | 0.6829 |
| 0.0031        | 8.0   | 64   | 0.0892          | 0.8333    | 0.75   | 0.7895 |
| 0.0065        | 9.0   | 72   | 0.1038          | 0.75      | 0.75   | 0.75   |
| 0.0014        | 10.0  | 80   | 0.1093          | 0.75      | 0.75   | 0.75   |
| 0.0045        | 11.0  | 88   | 0.0998          | 0.75      | 0.75   | 0.75   |
| 0.0027        | 12.0  | 96   | 0.0738          | 0.9444    | 0.85   | 0.8947 |
| 0.0008        | 13.0  | 104  | 0.0745          | 0.9444    | 0.85   | 0.8947 |
| 0.0029        | 14.0  | 112  | 0.1234          | 0.5833    | 0.7    | 0.6364 |
| 0.004         | 15.0  | 120  | 0.0865          | 0.6364    | 0.7    | 0.6667 |
| 0.0007        | 16.0  | 128  | 0.0888          | 0.8333    | 0.75   | 0.7895 |
| 0.0055        | 17.0  | 136  | 0.0934          | 0.75      | 0.75   | 0.75   |
| 0.0004        | 18.0  | 144  | 0.0854          | 0.8333    | 0.75   | 0.7895 |
| 0.0004        | 19.0  | 152  | 0.0846          | 0.8333    | 0.75   | 0.7895 |
| 0.0005        | 20.0  | 160  | 0.0843          | 0.8333    | 0.75   | 0.7895 |
| 0.0005        | 21.0  | 168  | 0.0852          | 0.8333    | 0.75   | 0.7895 |
| 0.0004        | 22.0  | 176  | 0.0862          | 0.8333    | 0.75   | 0.7895 |
| 0.0005        | 23.0  | 184  | 0.0875          | 0.8333    | 0.75   | 0.7895 |
| 0.0003        | 24.0  | 192  | 0.0892          | 0.75      | 0.75   | 0.75   |
| 0.0005        | 25.0  | 200  | 0.0896          | 0.75      | 0.75   | 0.75   |

### Framework versions

- Transformers 4.56.1
- PyTorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0