layoutlm-funsd

This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the FUNSD form-understanding dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list):

  • Loss: 0.6945
  • Answer: precision 0.7029, recall 0.7985, F1 0.7477 (support: 809)
  • Header: precision 0.3657, recall 0.4118, F1 0.3874 (support: 119)
  • Question: precision 0.7919, recall 0.8216, F1 0.8065 (support: 1065)
  • Overall Precision: 0.7275
  • Overall Recall: 0.7878
  • Overall F1: 0.7564
  • Overall Accuracy: 0.8052
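
The checkpoint can be loaded like any other LayoutLM token-classification model. The sketch below is a minimal, hedged example: the repo id is taken from this card's model tree, while the toy words, the bounding boxes, and the assumption that the repository ships its own fast tokenizer are illustrative rather than stated anywhere in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Repo id from this model card; if the repo has no tokenizer files, load the
# tokenizer from "microsoft/layoutlm-base-uncased" instead.
model_id = "poojadamavarapu123/layoutlm-funsd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Toy input: pre-split words plus one 0-1000-normalized box [x0, y0, x1, y1] per word.
words = ["Date:", "01/02/2020"]
word_boxes = [[57, 84, 120, 96], [130, 84, 230, 96]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# LayoutLM expects one box per token, so expand the word-level boxes via word_ids();
# special tokens ([CLS]/[SEP]) get a dummy [0, 0, 0, 0] box in this simplified sketch.
token_boxes = [
    word_boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in encoding.word_ids(0)
]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits

predicted = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(encoding.word_ids(0), predicted)))
```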

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a hedged TrainingArguments sketch reconstructing them follows the list:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch_fused (fused PyTorch AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 15
  • mixed_precision_training: Native AMP
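
A hedged reconstruction of these settings as Transformers TrainingArguments; output_dir, the evaluation cadence, and anything not listed above are assumptions, not values taken from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-funsd",      # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",        # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                        # "Native AMP" mixed-precision training
    eval_strategy="steps",            # assumed; the results table evaluates every 10 steps
    eval_steps=10,
)
```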

Training results

Per-label columns show entity-level precision / recall / F1; supports are 809 (Answer), 119 (Header), and 1065 (Question).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---------------|-------|------|-----------------|---------------------|---------------------|-----------------------|-------------------|----------------|------------|------------------|
| 1.8328 | 1.0 | 10 | 1.6269 | 0.0121 / 0.0124 / 0.0122 | 0.0000 / 0.0000 / 0.0000 | 0.2000 / 0.1549 / 0.1746 | 0.1060 | 0.0878 | 0.0960 | 0.3555 |
| 1.4745 | 2.0 | 20 | 1.2810 | 0.1425 / 0.1557 / 0.1488 | 0.0000 / 0.0000 / 0.0000 | 0.4480 / 0.6761 / 0.5389 | 0.3394 | 0.4245 | 0.3772 | 0.5679 |
| 1.0973 | 3.0 | 30 | 0.9149 | 0.4819 / 0.6267 / 0.5449 | 0.0652 / 0.0252 / 0.0364 | 0.5712 / 0.7531 / 0.6497 | 0.5244 | 0.6583 | 0.5838 | 0.7085 |
| 0.8318 | 4.0 | 40 | 0.7626 | 0.5760 / 0.7540 / 0.6531 | 0.2162 / 0.1345 / 0.1658 | 0.6803 / 0.7474 / 0.7123 | 0.6175 | 0.7135 | 0.6620 | 0.7648 |
| 0.6741 | 5.0 | 50 | 0.7031 | 0.6305 / 0.7318 / 0.6773 | 0.2947 / 0.2353 / 0.2617 | 0.6975 / 0.8141 / 0.7513 | 0.6531 | 0.7461 | 0.6965 | 0.7841 |
| 0.5677 | 6.0 | 60 | 0.6814 | 0.6349 / 0.7738 / 0.6975 | 0.3125 / 0.2101 / 0.2513 | 0.7495 / 0.7784 / 0.7637 | 0.6814 | 0.7426 | 0.7107 | 0.7808 |
| 0.4939 | 7.0 | 70 | 0.6524 | 0.6796 / 0.7812 / 0.7269 | 0.3148 / 0.2857 / 0.2996 | 0.7666 / 0.8141 / 0.7896 | 0.7068 | 0.7692 | 0.7367 | 0.7968 |
| 0.4355 | 8.0 | 80 | 0.6496 | 0.6660 / 0.7886 / 0.7221 | 0.3056 / 0.2773 / 0.2907 | 0.7634 / 0.8300 / 0.7953 | 0.6992 | 0.7802 | 0.7375 | 0.8021 |
| 0.3922 | 9.0 | 90 | 0.6662 | 0.6965 / 0.7973 / 0.7435 | 0.3171 / 0.3277 / 0.3223 | 0.7817 / 0.8103 / 0.7958 | 0.7185 | 0.7762 | 0.7463 | 0.8034 |
| 0.3788 | 10.0 | 100 | 0.6630 | 0.7004 / 0.7948 / 0.7446 | 0.3628 / 0.3445 / 0.3534 | 0.7728 / 0.8207 / 0.7960 | 0.7206 | 0.7817 | 0.7499 | 0.8053 |
| 0.3263 | 11.0 | 110 | 0.6684 | 0.6941 / 0.7936 / 0.7405 | 0.3333 / 0.3529 / 0.3429 | 0.7745 / 0.8319 / 0.8022 | 0.7153 | 0.7878 | 0.7498 | 0.8053 |
| 0.3098 | 12.0 | 120 | 0.6795 | 0.7034 / 0.7973 / 0.7474 | 0.3594 / 0.3866 / 0.3725 | 0.7888 / 0.8169 / 0.8026 | 0.7267 | 0.7832 | 0.7539 | 0.8073 |
| 0.2976 | 13.0 | 130 | 0.6857 | 0.6913 / 0.7973 / 0.7405 | 0.3525 / 0.3613 / 0.3568 | 0.7832 / 0.8310 / 0.8064 | 0.7199 | 0.7893 | 0.7530 | 0.8042 |
| 0.2770 | 14.0 | 140 | 0.6918 | 0.7023 / 0.7960 / 0.7462 | 0.3636 / 0.4034 / 0.3825 | 0.7848 / 0.8254 / 0.8046 | 0.7243 | 0.7883 | 0.7549 | 0.8038 |
| 0.2706 | 15.0 | 150 | 0.6945 | 0.7029 / 0.7985 / 0.7477 | 0.3657 / 0.4118 / 0.3874 | 0.7919 / 0.8216 / 0.8065 | 0.7275 | 0.7878 | 0.7564 | 0.8052 |
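
Metrics in this shape (per-label precision / recall / F1 plus support) are typically computed with seqeval, though the card does not name the evaluation code. A toy sketch of that entity-level computation, assuming FUNSD-style BIO tags:

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Two toy tag sequences; the real evaluation runs over the full validation split,
# so these numbers are illustrative only.
y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O"], ["B-HEADER", "O"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O"], ["O", "O"]]

print(precision_score(y_true, y_pred))  # correct predicted entities / all predicted entities
print(recall_score(y_true, y_pred))     # correct predicted entities / all gold entities
print(f1_score(y_true, y_pred))         # harmonic mean of the two
```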

Framework versions

  • Transformers 4.57.1
  • Pytorch 2.9.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.1