---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
  results: []
---

# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set:

- Loss: 0.6891
- Answer: {'precision': 0.7296996662958843, 'recall': 0.8108776266996292, 'f1': 0.7681498829039812, 'number': 809}
- Header: {'precision': 0.3620689655172414, 'recall': 0.35294117647058826, 'f1': 0.3574468085106383, 'number': 119}
- Question: {'precision': 0.7939609236234458, 'recall': 0.8394366197183099, 'f1': 0.8160657234139663, 'number': 1065}
- Overall Precision: 0.7436
- Overall Recall: 0.7988
- Overall F1: 0.7702
- Overall Accuracy: 0.8108
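The overall numbers are micro-averages over the three entity types. As a quick consistency check (a sketch, not part of the training code), they can be recomputed from the per-type precision, recall, and support counts listed above:

```python
# Recompute the overall (micro-averaged) metrics from the per-type results.
per_type = {
    "Answer":   {"precision": 0.7296996662958843, "recall": 0.8108776266996292, "number": 809},
    "Header":   {"precision": 0.3620689655172414, "recall": 0.35294117647058826, "number": 119},
    "Question": {"precision": 0.7939609236234458, "recall": 0.8394366197183099, "number": 1065},
}

tp = pred = gold = 0
for r in per_type.values():
    t = round(r["recall"] * r["number"])  # true positives for this type
    tp += t
    pred += round(t / r["precision"])     # entities predicted for this type
    gold += r["number"]                   # gold entities of this type

precision = tp / pred
recall = tp / gold
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 4), round(recall, 4), round(f1, 4))  # 0.7436 0.7988 0.7702
```

The recomputed values match the reported Overall Precision/Recall/F1, confirming the overall scores are entity-level micro-averages.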

## Model description

More information needed

## Intended uses & limitations

More information needed
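Pending detailed usage notes, the sketch below illustrates LayoutLM's token-classification input format. It uses a tiny randomly initialised model so it runs offline; for real predictions you would instead load the fine-tuned checkpoint with `LayoutLMForTokenClassification.from_pretrained(...)` (repo id omitted here). The label count of 7 assumes the standard FUNSD BIO scheme.

```python
# Minimal sketch of LayoutLM token classification (tiny random model, offline).
import torch
from transformers import LayoutLMConfig, LayoutLMForTokenClassification

# Tiny config for illustration; the real model uses layoutlm-base-uncased sizes.
config = LayoutLMConfig(
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=7,  # assumed FUNSD BIO labels: O plus B/I for HEADER, QUESTION, ANSWER
)
model = LayoutLMForTokenClassification(config)
model.eval()

# LayoutLM takes token ids plus one (x0, y0, x1, y1) box per token,
# normalised to a 0-1000 page coordinate system.
input_ids = torch.randint(0, config.vocab_size, (1, 4))
bbox = torch.tensor([[[0, 0, 0, 0],                # [CLS]
                      [57, 86, 112, 97],           # a word box
                      [120, 86, 180, 97],          # another word box
                      [1000, 1000, 1000, 1000]]])  # [SEP]

with torch.no_grad():
    logits = model(input_ids=input_ids, bbox=bbox).logits
print(logits.shape)  # torch.Size([1, 4, 7]) -> one label score vector per token
```

The per-token argmax over the last dimension then yields the predicted BIO tag for each token.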

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8006 | 1.0 | 10 | 1.5983 | {'precision': 0.015552099533437015, 'recall': 0.012360939431396786, 'f1': 0.013774104683195591, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.20512820512820512, 'recall': 0.12018779342723004, 'f1': 0.15156897572528122, 'number': 1065} | 0.1089 | 0.0692 | 0.0847 | 0.3439 |
| 1.4817 | 2.0 | 20 | 1.2758 | {'precision': 0.23966065747614, 'recall': 0.27935723114956734, 'f1': 0.2579908675799087, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4002998500749625, 'recall': 0.5014084507042254, 'f1': 0.4451854939558149, 'number': 1065} | 0.3338 | 0.3813 | 0.3560 | 0.5953 |
| 1.1479 | 3.0 | 30 | 0.9589 | {'precision': 0.48957298907646474, 'recall': 0.6093943139678616, 'f1': 0.5429515418502202, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.5823389021479713, 'recall': 0.6873239436619718, 'f1': 0.6304909560723513, 'number': 1065} | 0.5361 | 0.6147 | 0.5727 | 0.6955 |
| 0.872 | 4.0 | 40 | 0.8144 | {'precision': 0.574447646493756, 'recall': 0.7391841779975278, 'f1': 0.6464864864864865, 'number': 809} | {'precision': 0.075, 'recall': 0.025210084033613446, 'f1': 0.03773584905660377, 'number': 119} | {'precision': 0.6640281442392261, 'recall': 0.7089201877934272, 'f1': 0.6857402361489555, 'number': 1065} | 0.6114 | 0.6804 | 0.6440 | 0.7492 |
| 0.6944 | 5.0 | 50 | 0.7241 | {'precision': 0.638682252922423, 'recall': 0.7428924598269468, 'f1': 0.6868571428571428, 'number': 809} | {'precision': 0.1875, 'recall': 0.12605042016806722, 'f1': 0.1507537688442211, 'number': 119} | {'precision': 0.674493927125506, 'recall': 0.7821596244131456, 'f1': 0.7243478260869566, 'number': 1065} | 0.6423 | 0.7270 | 0.6820 | 0.7876 |
| 0.588 | 6.0 | 60 | 0.6902 | {'precision': 0.6445115810674723, 'recall': 0.7911001236093943, 'f1': 0.7103218645948945, 'number': 809} | {'precision': 0.265625, 'recall': 0.14285714285714285, 'f1': 0.18579234972677594, 'number': 119} | {'precision': 0.7232219365895458, 'recall': 0.7924882629107981, 'f1': 0.7562724014336917, 'number': 1065} | 0.6749 | 0.7531 | 0.7119 | 0.7922 |
| 0.5155 | 7.0 | 70 | 0.6651 | {'precision': 0.6762820512820513, 'recall': 0.7824474660074165, 'f1': 0.7255014326647564, 'number': 809} | {'precision': 0.22115384615384615, 'recall': 0.19327731092436976, 'f1': 0.2062780269058296, 'number': 119} | {'precision': 0.7402597402597403, 'recall': 0.8028169014084507, 'f1': 0.7702702702702703, 'number': 1065} | 0.6884 | 0.7582 | 0.7216 | 0.7979 |
| 0.4567 | 8.0 | 80 | 0.6544 | {'precision': 0.682062298603652, 'recall': 0.7849196538936959, 'f1': 0.7298850574712644, 'number': 809} | {'precision': 0.21359223300970873, 'recall': 0.18487394957983194, 'f1': 0.1981981981981982, 'number': 119} | {'precision': 0.759515570934256, 'recall': 0.8244131455399061, 'f1': 0.790634849167042, 'number': 1065} | 0.7009 | 0.7702 | 0.7339 | 0.8047 |
| 0.4044 | 9.0 | 90 | 0.6556 | {'precision': 0.7029379760609358, 'recall': 0.7985166872682324, 'f1': 0.7476851851851851, 'number': 809} | {'precision': 0.2621359223300971, 'recall': 0.226890756302521, 'f1': 0.24324324324324326, 'number': 119} | {'precision': 0.7710320901994796, 'recall': 0.8347417840375587, 'f1': 0.8016230838593327, 'number': 1065} | 0.7182 | 0.7837 | 0.7495 | 0.8089 |
| 0.3974 | 10.0 | 100 | 0.6652 | {'precision': 0.7141292442497261, 'recall': 0.8059332509270705, 'f1': 0.7572590011614402, 'number': 809} | {'precision': 0.30357142857142855, 'recall': 0.2857142857142857, 'f1': 0.2943722943722944, 'number': 119} | {'precision': 0.7917414721723519, 'recall': 0.828169014084507, 'f1': 0.8095456631482332, 'number': 1065} | 0.7331 | 0.7868 | 0.7590 | 0.8094 |
| 0.3327 | 11.0 | 110 | 0.6705 | {'precision': 0.720620842572062, 'recall': 0.8034610630407911, 'f1': 0.7597895967270601, 'number': 809} | {'precision': 0.336283185840708, 'recall': 0.31932773109243695, 'f1': 0.32758620689655166, 'number': 119} | {'precision': 0.7891939769707705, 'recall': 0.8366197183098592, 'f1': 0.812215132178669, 'number': 1065} | 0.7365 | 0.7923 | 0.7634 | 0.8096 |
| 0.3194 | 12.0 | 120 | 0.6719 | {'precision': 0.7239057239057239, 'recall': 0.7972805933250927, 'f1': 0.7588235294117647, 'number': 809} | {'precision': 0.3853211009174312, 'recall': 0.35294117647058826, 'f1': 0.36842105263157904, 'number': 119} | {'precision': 0.7911111111111111, 'recall': 0.8356807511737089, 'f1': 0.812785388127854, 'number': 1065} | 0.7421 | 0.7913 | 0.7659 | 0.8115 |
| 0.301 | 13.0 | 130 | 0.6828 | {'precision': 0.7256637168141593, 'recall': 0.8108776266996292, 'f1': 0.7659077641564506, 'number': 809} | {'precision': 0.41414141414141414, 'recall': 0.3445378151260504, 'f1': 0.3761467889908257, 'number': 119} | {'precision': 0.8005415162454874, 'recall': 0.8328638497652582, 'f1': 0.8163828808099403, 'number': 1065} | 0.7504 | 0.7948 | 0.7719 | 0.8099 |
| 0.286 | 14.0 | 140 | 0.6856 | {'precision': 0.7279821627647715, 'recall': 0.8071693448702101, 'f1': 0.7655334114888628, 'number': 809} | {'precision': 0.3853211009174312, 'recall': 0.35294117647058826, 'f1': 0.36842105263157904, 'number': 119} | {'precision': 0.7931034482758621, 'recall': 0.8422535211267606, 'f1': 0.8169398907103825, 'number': 1065} | 0.7450 | 0.7988 | 0.7709 | 0.8108 |
| 0.2789 | 15.0 | 150 | 0.6891 | {'precision': 0.7296996662958843, 'recall': 0.8108776266996292, 'f1': 0.7681498829039812, 'number': 809} | {'precision': 0.3620689655172414, 'recall': 0.35294117647058826, 'f1': 0.3574468085106383, 'number': 119} | {'precision': 0.7939609236234458, 'recall': 0.8394366197183099, 'f1': 0.8160657234139663, 'number': 1065} | 0.7436 | 0.7988 | 0.7702 | 0.8108 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1