---
library_name: transformers
pipeline_tag: table-question-answering
license: mit
datasets:
- ethanbradley/synfintabs
language:
- en
base_model:
- microsoft/layoutlm-base-uncased
---

# FinTabQA-A4: Financial Table Question-Answering (A4)

A model for financial table question-answering using the [LayoutLM](https://huggingface.co/microsoft/layoutlm-base-uncased) architecture.

## Quick start

To get started with FinTabQA-A4, load the model and a fast tokenizer as you would any other Hugging Face Transformers model and tokenizer. Below is a minimal working example using the [SynFinTabs](https://huggingface.co/datasets/ethanbradley/synfintabs) dataset.

```python
>>> from typing import List, Tuple
>>> from datasets import load_dataset
>>> from transformers import LayoutLMForQuestionAnswering, LayoutLMTokenizerFast
>>> import torch
>>>
>>> synfintabs_dataset = load_dataset("ethanbradley/synfintabs")
>>> model = LayoutLMForQuestionAnswering.from_pretrained(
...     "ethanbradley/fintabqa-a4")
>>> tokenizer = LayoutLMTokenizerFast.from_pretrained(
...     "microsoft/layoutlm-base-uncased")
>>>
>>> def normalise_boxes(
...         boxes: List[List[int]],
...         old_image_size: Tuple[int, int],
...         new_image_size: Tuple[int, int]) -> List[List[int]]:
...     old_im_w, old_im_h = old_image_size
...     new_im_w, new_im_h = new_image_size
...
...     return [[
...         max(min(int(x1 / old_im_w * new_im_w), new_im_w), 0),
...         max(min(int(y1 / old_im_h * new_im_h), new_im_h), 0),
...         max(min(int(x2 / old_im_w * new_im_w), new_im_w), 0),
...         max(min(int(y2 / old_im_h * new_im_h), new_im_h), 0)
...     ] for (x1, y1, x2, y2) in boxes]
>>>
>>> item = synfintabs_dataset['test'][0]
>>> question_dict = next(question for question in item['questions']
...     if question['id'] == item['question_id'])
>>> encoding = tokenizer(
...     question_dict['question'].split(),
...     item['ocr_results']['words'],
...     max_length=512,
...     padding="max_length",
...     truncation="only_second",
...     is_split_into_words=True,
...     return_token_type_ids=True,
...     return_tensors="pt")
>>>
>>> word_boxes = normalise_boxes(
...     item['ocr_results']['bboxes'],
...     item['image'].size,
...     (1000, 1000))
>>> token_boxes = []
>>>
>>> for i, s, w in zip(
...         encoding['input_ids'][0],
...         encoding.sequence_ids(0),
...         encoding.word_ids(0)):
...     if s == 1:
...         token_boxes.append(word_boxes[w])
...     elif i == tokenizer.sep_token_id:
...         token_boxes.append([1000] * 4)
...     else:
...         token_boxes.append([0] * 4)
>>>
>>> encoding['bbox'] = torch.tensor([token_boxes])
>>> outputs = model(**encoding)
>>> start = encoding.word_ids(0)[outputs['start_logits'].argmax(-1)]
>>> end = encoding.word_ids(0)[outputs['end_logits'].argmax(-1)]
>>>
>>> print(f"Target: {question_dict['answer']}")
Target: 6,980
>>>
>>> print(f"Prediction: {' '.join(item['ocr_results']['words'][start : end])}")
Prediction: 6,980
```
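
LayoutLM expects bounding boxes normalised to a 0–1000 coordinate space, which is what the `normalise_boxes` helper above does, clamping any out-of-range values. Here is a minimal self-contained check of that scaling behaviour (the box coordinates and image sizes are made up for illustration, not taken from SynFinTabs):

```python
from typing import List, Tuple

def normalise_boxes(
        boxes: List[List[int]],
        old_image_size: Tuple[int, int],
        new_image_size: Tuple[int, int]) -> List[List[int]]:
    # Rescale (x1, y1, x2, y2) boxes from the original image size to the
    # target coordinate space, clamping each value to [0, new size].
    old_im_w, old_im_h = old_image_size
    new_im_w, new_im_h = new_image_size
    return [[
        max(min(int(x1 / old_im_w * new_im_w), new_im_w), 0),
        max(min(int(y1 / old_im_h * new_im_h), new_im_h), 0),
        max(min(int(x2 / old_im_w * new_im_w), new_im_w), 0),
        max(min(int(y2 / old_im_h * new_im_h), new_im_h), 0)
    ] for (x1, y1, x2, y2) in boxes]

# A box covering the left half of a 400x200 image maps to the left half of
# LayoutLM's 1000x1000 space; a box extending past the right edge is clamped.
print(normalise_boxes(
    [[0, 0, 200, 200], [300, 50, 450, 150]], (400, 200), (1000, 1000)))
# → [[0, 0, 500, 1000], [750, 250, 1000, 750]]
```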

## Citation

If you use this model, please cite both the article, using the citation below, and the model itself.
92
+
93
+ ```bib
94
+ @inproceedings{bradley2026synfintabs,
95
+ title = {Syn{F}in{T}abs: A Dataset of Synthetic Financial Tables for Information and Table Extraction},
96
+ author = {Bradley, Ethan and Roman, Muhammad and Rafferty, Karen and Devereux, Barry},
97
+ year = 2026,
98
+ month = jan,
99
+ booktitle = {Document Analysis and Recognition -- ICDAR 2025 Workshops},
100
+ publisher = {Springer Nature Switzerland},
101
+ address = {Cham},
102
+ pages = {85--100},
103
+ doi = {10.1007/978-3-032-09371-4_6},
104
+ isbn = {978-3-032-09371-4},
105
+ editor = {Jin, Lianwen and Zanibbi, Richard and Eglin, Veronique},
106
+ abstract = {Table extraction from document images is a challenging AI problem, and labelled data for many content domains is difficult to come by. Existing table extraction datasets often focus on scientific tables due to the vast amount of academic articles that are readily available, along with their source code. However, there are significant layout and typographical differences between tables found across scientific, financial, and other domains. Current datasets often lack the words, and their positions, contained within the tables, instead relying on unreliable OCR to extract these features for training modern machine learning models on natural language processing tasks. Therefore, there is a need for a more general method of obtaining labelled data. We present SynFinTabs, a large-scale, labelled dataset of synthetic financial tables. Our hope is that our method of generating these synthetic tables is transferable to other domains. To demonstrate the effectiveness of our dataset in training models to extract information from table images, we create FinTabQA, a layout large language model trained on an extractive question-answering task. We test our model using real-world financial tables and compare it to a state-of-the-art generative model and discuss the results. We make the dataset, model, and dataset generation code publicly available (https://ethanbradley.co.uk/research/synfintabs).}
107
+ }
108
+ ```