lilt-en-funsd

This model is a fine-tuned version of SCUT-DLVCLab/lilt-roberta-en-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5659
  • Answer: precision 0.8772, recall 0.9009, F1 0.8889 (817 entities)
  • Header: precision 0.5826, recall 0.5630, F1 0.5726 (119 entities)
  • Question: precision 0.9032, recall 0.9006, F1 0.9019 (1077 entities)
  • Overall Precision: 0.8743
  • Overall Recall: 0.8808
  • Overall F1: 0.8775
  • Overall Accuracy: 0.8124
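
For completeness, a minimal inference sketch (not part of the original card): it assumes the repository ships the same LayoutLMv3-style tokenizer as the base checkpoint, so that words and their 0–1000-normalized bounding boxes (e.g. from an OCR engine) can be passed directly. The example words and boxes below are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Gbat/lilt-en-funsd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Example words and boxes; in practice these come from an OCR engine, with each
# box normalized to the 0-1000 coordinate range expected by LiLT.
words = ["DATE:", "March", "3,", "1999"]
boxes = [[70, 61, 135, 82], [140, 61, 200, 82], [205, 61, 230, 82], [235, 61, 295, 82]]

encoding = tokenizer(words, boxes=boxes, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

# Map the highest-scoring class per token back to its label name.
predicted_ids = logits.argmax(-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, predicted_ids)])
```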

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • training_steps: 2500
  • mixed_precision_training: Native AMP
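
As a rough illustration (not taken from the original training script), the hyperparameters above map onto transformers.TrainingArguments as sketched below; model, train_dataset, eval_dataset, and compute_metrics are placeholders that would need to be defined elsewhere.

```python
from transformers import Trainer, TrainingArguments

# Hypothetical reconstruction of the training setup from the hyperparameters above.
training_args = TrainingArguments(
    output_dir="lilt-en-funsd",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    max_steps=2500,
    fp16=True,                  # Native AMP mixed precision
    eval_strategy="steps",      # evaluation every 200 steps, matching the results table
    eval_steps=200,
    logging_steps=200,
    save_steps=200,
)

trainer = Trainer(
    model=model,                        # placeholder: a LiltForTokenClassification instance
    args=training_args,
    train_dataset=train_dataset,        # placeholder: tokenized FUNSD-style training split
    eval_dataset=eval_dataset,          # placeholder: tokenized evaluation split
    compute_metrics=compute_metrics,    # placeholder: see the seqeval sketch after the results table
)
trainer.train()
```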

Training results

| Training Loss | Epoch | Step | Validation Loss | Answer P / R / F1 | Header P / R / F1 | Question P / R / F1 | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.4346 | 10.5263 | 200 | 0.9773 | 0.8395 / 0.9094 / 0.8731 | 0.5000 / 0.5798 / 0.5370 | 0.8733 / 0.8960 / 0.8845 | 0.8351 | 0.8828 | 0.8582 | 0.8061 |
| 0.0497 | 21.0526 | 400 | 1.2488 | 0.8597 / 0.8923 / 0.8757 | 0.5225 / 0.4874 / 0.5043 | 0.8718 / 0.9090 / 0.8900 | 0.8482 | 0.8773 | 0.8625 | 0.8099 |
| 0.0156 | 31.5789 | 600 | 1.3758 | 0.8043 / 0.9510 / 0.8716 | 0.5064 / 0.6639 / 0.5745 | 0.9111 / 0.8663 / 0.8881 | 0.8336 | 0.8887 | 0.8603 | 0.7875 |
| 0.0072 | 42.1053 | 800 | 1.4574 | 0.8395 / 0.9217 / 0.8786 | 0.5250 / 0.5294 / 0.5272 | 0.8883 / 0.8858 / 0.8870 | 0.8465 | 0.8793 | 0.8626 | 0.8020 |
| 0.0039 | 52.6316 | 1000 | 1.6322 | 0.8585 / 0.8617 / 0.8601 | 0.5185 / 0.5882 / 0.5512 | 0.8744 / 0.8988 / 0.8864 | 0.8448 | 0.8654 | 0.8550 | 0.7783 |
| 0.0037 | 63.1579 | 1200 | 1.6199 | 0.8276 / 0.9106 / 0.8671 | 0.5385 / 0.5882 / 0.5622 | 0.9043 / 0.8691 / 0.8864 | 0.8479 | 0.8693 | 0.8585 | 0.7901 |
| 0.0015 | 73.6842 | 1400 | 1.6549 | 0.8115 / 0.9327 / 0.8679 | 0.5769 / 0.5042 / 0.5381 | 0.9180 / 0.8737 / 0.8953 | 0.8525 | 0.8758 | 0.8640 | 0.7951 |
| 0.0010 | 84.2105 | 1600 | 1.6181 | 0.8701 / 0.8935 / 0.8816 | 0.5833 / 0.5882 / 0.5858 | 0.8993 / 0.8951 / 0.8972 | 0.8685 | 0.8763 | 0.8724 | 0.8010 |
| 0.0007 | 94.7368 | 1800 | 1.5533 | 0.8536 / 0.9204 / 0.8857 | 0.5678 / 0.5630 / 0.5654 | 0.9201 / 0.8765 / 0.8978 | 0.8706 | 0.8758 | 0.8732 | 0.8192 |
| 0.0004 | 105.2632 | 2000 | 1.5659 | 0.8772 / 0.9009 / 0.8889 | 0.5826 / 0.5630 / 0.5726 | 0.9032 / 0.9006 / 0.9019 | 0.8743 | 0.8808 | 0.8775 | 0.8124 |
| 0.0004 | 115.7895 | 2200 | 1.5713 | 0.8488 / 0.9204 / 0.8831 | 0.6000 / 0.5798 / 0.5897 | 0.9030 / 0.8904 / 0.8967 | 0.8628 | 0.8843 | 0.8734 | 0.8143 |
| 0.0001 | 126.3158 | 2400 | 1.5663 | 0.8584 / 0.9204 / 0.8884 | 0.6095 / 0.5378 / 0.5714 | 0.8956 / 0.8997 / 0.8976 | 0.8652 | 0.8867 | 0.8759 | 0.8150 |

Per-entity support on the evaluation set: Answer 817, Header 119, Question 1077.
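
The per-entity results above (Answer / Header / Question with precision, recall, F1, and support) are in the shape produced by the seqeval metric. A minimal sketch of a compute_metrics function that yields this format is shown below; the label_list is an assumed FUNSD-style BIO tag set, not taken from the original training code.

```python
import numpy as np
import evaluate

metric = evaluate.load("seqeval")

# Assumed label set for FUNSD-style token classification (BIO scheme).
label_list = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=2)

    # Drop positions labeled -100 (special tokens / padding) before scoring.
    true_predictions = [
        [label_list[p] for p, lbl in zip(pred_row, label_row) if lbl != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[lbl] for p, lbl in zip(pred_row, label_row) if lbl != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    # seqeval returns one dict per entity type plus overall metrics.
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
        **{k: v for k, v in results.items() if isinstance(v, dict)},
    }
```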

Framework versions

  • Transformers 4.48.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1