Datasets:

ArXiv:
License:

feat: Sync evaluation codes and leaderboard with the latest version

#5
by DaekyuKwon - opened
.gitignore ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ __pycache__/
2
+ *.py[cod]
3
+ predictions/
4
+ *.eval.json
README.md CHANGED
@@ -201,8 +201,16 @@ It contains a wide range of document layouts, from text-heavy pages to complex t
201
  The dataset comes with annotations for layout elements such as paragraphs, headings, and tables.
202
 
203
  The following options are required for evaluation:
204
- - **`--ref_path`**: Specifies the path to the reference JSON file, predefined as `dataset/reference.json` for evaluation purposes.
205
- - **`--pred_path`**: Indicates the path to the predicted JSON file. You can either use a sample result located in the `dataset/sample_results` folder, or generate your own by using the inference script provided in the `scripts` folder.
 
 
 
 
 
 
 
 
206
 
207
  #### Element detection and serialization evaluation
208
  This evaluation will compute the NID metric to assess how accurately the text in the document is recognized considering the structure and order of the document layout.
@@ -210,8 +218,8 @@ To evaluate the document layout results, run the following command:
210
 
211
  ```
212
  $ python evaluate.py \
213
- --ref_path <path to the reference json file> \
214
- --pred_path <path to the predicted json file> \
215
  --mode layout
216
  ```
217
 
@@ -222,21 +230,22 @@ To evaluate table recognition performance, use the following command:
222
 
223
  ```
224
  $ python evaluate.py \
225
- --ref_path <path to the reference json file> \
226
- --pred_path <path to the predicted json file> \
227
  --mode table
228
  ```
229
 
230
  # Leaderboard
231
  <div style="max-width: 800px; width: 100%; overflow-x: auto; margin: 0 auto;">
232
 
233
- | Source | Request date | TEDS ↑ | TEDS-S ↑ | NID ↑ | Avg. Time (secs) ↓ |
234
- |:---------------------|:------------:|-----------:|----------:|------------:|------------:|
235
- | upstage | 2024-10-24 | **93.48** | **94.16** | **97.02** | **3.79** |
236
- | aws | 2024-10-24 | 88.05 | 90.79 | 96.71 | 14.47 |
237
- | llamaparse | 2024-10-24 | 74.57 | 76.34 | 92.82 | 4.14 |
238
- | unstructured | 2024-10-24 | 65.56 | 70.00 | 91.18 | 13.14 |
239
- | google | 2024-10-24 | 66.13 | 71.58 | 90.86 | 5.85 |
240
- | microsoft | 2024-10-24 | 87.19 | 89.75 | 87.69 | 4.44 |
 
241
 
242
  </div>
 
201
  The dataset comes with annotations for layout elements such as paragraphs, headings, and tables.
202
 
203
  The following options are required for evaluation:
204
+ - **`--ref-path`**: Specifies the path to the reference JSON file, predefined as `dataset/reference.json` for evaluation purposes.
205
+ - **`--pred-path`**: Indicates the path to the predicted JSON file. You can either use a sample result located in the `dataset/sample_results` folder, or generate your own by using the inference script provided in the `scripts` folder.
206
+ - **`--mode`**: One of `layout`, `table`, `all`, or `speed` (default: `all`).
207
+ - `layout` — NID for element detection and serialization
208
+ - `table` — TEDS and TEDS-S for table structure recognition
209
+ - `all` — run both `layout` and `table`
210
+ - `speed` — average latency and throughput from `time_sec` fields
211
+ - **`--evaluate-merged-table`**: Optional. Enables merged-table preprocessing when the prediction or reference declares a `merged_tables` key.
212
+
213
+ Per-image scores and metric summaries are written next to the prediction file as `<pred-file>.eval.json`.
214
 
215
  #### Element detection and serialization evaluation
216
  This evaluation will compute the NID metric to assess how accurately the text in the document is recognized considering the structure and order of the document layout.
 
218
 
219
  ```
220
  $ python evaluate.py \
221
+ --ref-path <path to the reference json file> \
222
+ --pred-path <path to the predicted json file> \
223
  --mode layout
224
  ```
225
 
 
230
 
231
  ```
232
  $ python evaluate.py \
233
+ --ref-path <path to the reference json file> \
234
+ --pred-path <path to the predicted json file> \
235
  --mode table
236
  ```
237
 
238
  # Leaderboard
239
  <div style="max-width: 800px; width: 100%; overflow-x: auto; margin: 0 auto;">
240
 
241
+ | Source | Request date | TEDS ↑ | TEDS-S ↑ | NID ↑ | Avg. Time (secs) ↓ |
242
+ |:----------------------------|:------------:|----------:|----------:|----------:|-------------------:|
243
+ | upstage (enhanced) | 2026-02-09 | 95.59 | **97.62** | **96.62** | 7.56 |
244
+ | upstage (standard) | 2026-02-09 | **96.06** | 97.25 | 96.29 | 3.77 |
245
+ | aws | 2026-02-09 | 95.48 | 96.99 | 95.97 | 7.95 |
246
+ | unstructured | 2026-02-09 | 80.26 | 89.51 | 91.78 | 6.80 |
247
+ | llamaparse | 2026-02-09 | 90.73 | 93.20 | 90.53 | 10.88 |
248
+ | microsoft | 2026-02-09 | 77.85 | 85.74 | 87.03 | **3.39** |
249
+ | google | 2026-02-09 | 78.30 | 80.71 | 82.17 | 37.00 |
250
 
251
  </div>
dataset/reference.json CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/{aws_241024.json → aws_260209.json} RENAMED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/google_241024.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/google_260209.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/llamaparse_241024.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/llamaparse_260209.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/microsoft_241024.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/microsoft_260209.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/unstructured_241024.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/unstructured_260209.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/upstage_enhanced_260209.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/sample_results/{upstage_241024.json → upstage_standard_260209.json} RENAMED
The diff for this file is too large to render. See raw diff
 
evaluate.py CHANGED
@@ -1,33 +1,56 @@
1
  import argparse
 
 
2
 
3
- from src.utils import read_file, check_data_validity
4
  from src.layout_evaluation import evaluate_layout
5
  from src.table_evaluation import evaluate_table
6
 
7
-
8
  def parse_args():
9
  parser = argparse.ArgumentParser(description="Arguments for evaluation")
10
  parser.add_argument(
11
- "--ref_path",
12
  type=str, required=True,
13
  help="Path to the ground truth file"
14
  )
15
  parser.add_argument(
16
- "--pred_path",
17
  type=str, required=True,
18
  help="Path to the prediction file"
19
  )
20
  parser.add_argument(
21
- "--ignore_classes_for_layout",
22
  type=list, default=["figure", "table", "chart"],
23
  help="List of layout classes to ignore. This is used only for layout evaluation."
24
  )
 
 
 
 
 
 
25
  parser.add_argument(
26
  "--mode",
27
- type=str, default="layout",
28
- help="Mode for evaluation (layout/table)"
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
29
  )
30
-
31
  return parser.parse_args()
32
 
33
 
@@ -42,20 +65,188 @@ def main():
42
  label_data = read_file(args.ref_path)
43
  pred_data = read_file(args.pred_path)
44
 
45
- check_data_validity(label_data, pred_data)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
46
 
47
- if args.mode == "layout":
48
- score = evaluate_layout(
49
- label_data, pred_data,
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
50
  ignore_classes=args.ignore_classes_for_layout,
 
51
  )
52
  print(f"NID Score: {score:.4f}")
53
- elif args.mode == "table":
54
- teds_score, teds_s_score = evaluate_table(label_data, pred_data)
55
- print(f"TEDS Score: {teds_score:.4f}")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
56
  print(f"TEDS-S Score: {teds_s_score:.4f}")
57
- else:
58
- raise ValueError(f"{args.mode} mode not supported")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
59
 
60
 
61
  if __name__ == "__main__":
 
1
  import argparse
2
+ import json
3
+ from pathlib import Path
4
 
5
+ from src.utils import read_file, check_data_validity, preprocess_merged_tables
6
  from src.layout_evaluation import evaluate_layout
7
  from src.table_evaluation import evaluate_table
8
 
 
9
  def parse_args():
10
  parser = argparse.ArgumentParser(description="Arguments for evaluation")
11
  parser.add_argument(
12
+ "--ref-path",
13
  type=str, required=True,
14
  help="Path to the ground truth file"
15
  )
16
  parser.add_argument(
17
+ "--pred-path",
18
  type=str, required=True,
19
  help="Path to the prediction file"
20
  )
21
  parser.add_argument(
22
+ "--ignore-classes-for-layout",
23
  type=list, default=["figure", "table", "chart"],
24
  help="List of layout classes to ignore. This is used only for layout evaluation."
25
  )
26
+ parser.add_argument(
27
+ "--filter-by-gt-area",
28
+ action=argparse.BooleanOptionalAction,
29
+ default=True,
30
+ help="Filter out prediction text within GT ignored regions (default: True). Use --no-filter-by-gt-area to disable."
31
+ )
32
  parser.add_argument(
33
  "--mode",
34
+ type=str, default="all",
35
+ choices=["layout", "table", "all", "speed"],
36
+ help="Mode for evaluation (layout/table/all/speed). 'all' runs both layout and table evaluations. 'speed' evaluates latency and throughput."
37
+ )
38
+ parser.add_argument(
39
+ "--min-match-score",
40
+ type=float, default=0.1,
41
+ help="Minimum match score for table evaluation"
42
+ )
43
+ parser.add_argument(
44
+ "--max-workers",
45
+ type=int, default=8,
46
+ help="Maximum number of workers for table evaluation"
47
+ )
48
+ parser.add_argument(
49
+ "--evaluate-merged-table",
50
+ action=argparse.BooleanOptionalAction,
51
+ default=False,
52
+ help="Enable merged table evaluation mode. When enabled, tables specified in 'merged_tables' will be preprocessed and merged before evaluation (default: False)."
53
  )
 
54
  return parser.parse_args()
55
 
56
 
 
65
  label_data = read_file(args.ref_path)
66
  pred_data = read_file(args.pred_path)
67
 
68
+ # Apply merged table preprocessing if enabled
69
+ if args.evaluate_merged_table:
70
+ print("Merged table evaluation mode enabled")
71
+ print("Preprocessing ground truth data (merging tables)...")
72
+ label_data = preprocess_merged_tables(label_data)
73
+
74
+ # Check if prediction data has merged_tables key
75
+ has_merged_tables_in_pred = any(
76
+ isinstance(doc, dict) and "merged_tables" in doc
77
+ for doc in pred_data.values()
78
+ )
79
+
80
+ if has_merged_tables_in_pred:
81
+ print("Preprocessing prediction data (merging tables)...")
82
+ pred_data = preprocess_merged_tables(pred_data)
83
+ else:
84
+ print("Prediction data does not contain 'merged_tables' key - skipping preprocessing (assuming already merged)")
85
 
86
+ valid_keys, error_keys, missing_keys = check_data_validity(label_data, pred_data)
87
+
88
+ # Report data quality statistics
89
+ total_gt_samples = len(label_data)
90
+ print(f"Total GT samples: {total_gt_samples}")
91
+ print(f"Valid predictions: {len(valid_keys)}")
92
+ print(f"Predictions with errors: {len(error_keys)}")
93
+ print(f"Missing predictions: {len(missing_keys)}")
94
+
95
+ if error_keys:
96
+ print(f"\nSkipping {len(error_keys)} samples with errors:")
97
+ for key in error_keys[:5]: # Show first 5
98
+ print(f" - {key}")
99
+ if len(error_keys) > 5:
100
+ print(f" ... and {len(error_keys) - 5} more")
101
+
102
+ if missing_keys:
103
+ print(f"\nWarning: {len(missing_keys)} samples missing from predictions")
104
+
105
+ if not valid_keys:
106
+ raise ValueError("No valid predictions found. Cannot perform evaluation.")
107
+
108
+ print("-" * 50)
109
+
110
+ # Filter data to only include valid keys
111
+ filtered_label_data = {k: label_data[k] for k in valid_keys}
112
+ filtered_pred_data = {k: pred_data[k] for k in valid_keys}
113
+
114
+ # Prepare output file path
115
+ pred_path_obj = Path(args.pred_path)
116
+ eval_output_path = pred_path_obj.with_suffix(".eval.json")
117
+
118
+ # Collect evaluation results
119
+ eval_results = {
120
+ "ref_path": args.ref_path,
121
+ "pred_path": args.pred_path,
122
+ "mode": args.mode,
123
+ "evaluate_merged_table": args.evaluate_merged_table,
124
+ "data_statistics": {
125
+ "total_gt_samples": total_gt_samples,
126
+ "valid_predictions": len(valid_keys),
127
+ "error_predictions": len(error_keys),
128
+ "missing_predictions": len(missing_keys),
129
+ },
130
+ "per_image_results": {}
131
+ }
132
+
133
+ if args.mode == "layout" or args.mode == "all":
134
+ if args.mode == "all":
135
+ print("=" * 50)
136
+ print("Layout Evaluation")
137
+ print("=" * 50)
138
+ score, per_image_layout_scores = evaluate_layout(
139
+ filtered_label_data, filtered_pred_data,
140
  ignore_classes=args.ignore_classes_for_layout,
141
+ filter_by_gt_area=args.filter_by_gt_area,
142
  )
143
  print(f"NID Score: {score:.4f}")
144
+ eval_results["layout"] = {
145
+ "nid_score": score,
146
+ "ignore_classes": args.ignore_classes_for_layout,
147
+ "filter_by_gt_area": args.filter_by_gt_area,
148
+ }
149
+
150
+ # Merge per-image layout scores into per_image_results
151
+ for image_key, scores_dict in per_image_layout_scores.items():
152
+ if image_key not in eval_results["per_image_results"]:
153
+ eval_results["per_image_results"][image_key] = {}
154
+ eval_results["per_image_results"][image_key]["layout"] = scores_dict
155
+
156
+ if args.filter_by_gt_area:
157
+ print("Note: Spatial filtering by GT area is enabled (predictions within GT ignored regions are filtered)")
158
+ if args.mode == "all":
159
+ print()
160
+
161
+ if args.mode == "table" or args.mode == "all":
162
+ if args.mode == "all":
163
+ print("=" * 50)
164
+ print("Table Evaluation")
165
+ print("=" * 50)
166
+ table_f1_score, teds_score, teds_s_score, per_image_table_scores = evaluate_table(
167
+ filtered_label_data, filtered_pred_data,
168
+ min_match_score=args.min_match_score,
169
+ max_workers=args.max_workers,
170
+ )
171
+ print(f"Table F1 Score: {table_f1_score:.4f}")
172
  print(f"TEDS-S Score: {teds_s_score:.4f}")
173
+ print(f"TEDS Score: {teds_score:.4f}")
174
+ eval_results["table"] = {
175
+ "table_f1_score": table_f1_score,
176
+ "teds_score": teds_score,
177
+ "teds_s_score": teds_s_score,
178
+ "min_match_score": args.min_match_score,
179
+ }
180
+
181
+ # Merge per-image table scores into per_image_results
182
+ for image_key, scores_dict in per_image_table_scores.items():
183
+ if image_key not in eval_results["per_image_results"]:
184
+ eval_results["per_image_results"][image_key] = {}
185
+ eval_results["per_image_results"][image_key]["table"] = scores_dict
186
+
187
+ if args.mode == "speed" or args.mode == "all":
188
+ if args.mode == "all":
189
+ print("=" * 50)
190
+ print("Speed Evaluation")
191
+ print("=" * 50)
192
+
193
+ # Calculate latency and throughput from time_sec fields (including errors)
194
+ total_time = 0.0
195
+ count = 0
196
+
197
+ for image_key, image_data in pred_data.items():
198
+ if image_key == "_metadata":
199
+ continue # Skip metadata entry
200
+ if isinstance(image_data, dict) and "time_sec" in image_data:
201
+ total_time += image_data["time_sec"]
202
+ count += 1
203
+
204
+ if count > 0:
205
+ avg_latency = total_time / count
206
+ sequential_throughput = 60.0 / avg_latency if avg_latency > 0 else 0.0
207
+
208
+ print(f"Average Latency: {avg_latency:.4f} sec/image")
209
+ print(f"Sequential Throughput: {sequential_throughput:.2f} images/min (based on avg latency)")
210
+ print(f"Total Images: {count}")
211
+ print(f"Sum of Latencies: {total_time:.2f} sec")
212
+
213
+ eval_results["speed"] = {
214
+ "avg_latency_sec": avg_latency,
215
+ "sequential_throughput_per_minute": sequential_throughput,
216
+ "total_images": count,
217
+ "sum_of_latencies_sec": total_time,
218
+ }
219
+
220
+ # Report concurrent throughput from inference metadata if available
221
+ metadata = pred_data.get("_metadata")
222
+ if metadata and "total_elapsed_time_sec" in metadata:
223
+ elapsed_time = metadata["total_elapsed_time_sec"]
224
+ concurrent_limit = metadata.get("concurrent_limit")
225
+ num_files = metadata.get("num_files", count)
226
+ concurrent_throughput = (num_files / elapsed_time) * 60 if elapsed_time > 0 else 0.0
227
+
228
+ print(f"\nConcurrent Throughput: {concurrent_throughput:.2f} images/min (wall-clock time)")
229
+ print(f" - Elapsed Time: {elapsed_time:.2f} sec")
230
+ if concurrent_limit:
231
+ print(f" - Concurrency: {concurrent_limit}")
232
+
233
+ eval_results["speed"]["concurrent_throughput_per_minute"] = concurrent_throughput
234
+ eval_results["speed"]["elapsed_time_sec"] = elapsed_time
235
+ eval_results["speed"]["concurrent_limit"] = concurrent_limit
236
+ else:
237
+ print("Warning: No time_sec fields found in prediction data")
238
+ eval_results["speed"] = {
239
+ "avg_latency_sec": None,
240
+ "sequential_throughput_per_minute": None,
241
+ "total_images": 0,
242
+ "sum_of_latencies_sec": 0.0,
243
+ }
244
+
245
+ # Save evaluation results to JSON file
246
+ with open(eval_output_path, "w", encoding="utf-8") as f:
247
+ json.dump(eval_results, f, indent=2, ensure_ascii=False)
248
+
249
+ print(f"\nEvaluation results saved to: {eval_output_path}")
250
 
251
 
252
  if __name__ == "__main__":
requirements.txt CHANGED
@@ -1,4 +1,7 @@
1
  rapidfuzz==3.6.1
2
  distance==0.1.3
3
  apted==1.0.3
4
- lxml==5.1.0
 
 
 
 
1
  rapidfuzz==3.6.1
2
  distance==0.1.3
3
  apted==1.0.3
4
+ lxml==5.1.0
5
+ numpy
6
+ scipy
7
+ shapely
scripts/README.md CHANGED
@@ -1,125 +1,273 @@
1
  # Document Parsing Models - Inference Guide
 
2
  ## Overview
3
- The scripts in this folder allow users to extract structured data from unstructured documents using different document parsing services and libraries.
4
- Each service follows a standard installation procedure and provides an `infer_*` script to perform inference on PDF or Image samples.
5
 
6
- You can choose from document parsing products such as **Upstage DP**, **AWS Textract**, **Google Document AI**, **Microsoft Azure Form Recognizer**, **LlamaParse**, or **Unstructured**.
7
- Most of these services require an API key for access, so ensure you follow specific setup instructions for each product to configure the environment correctly.
8
 
9
- Each service generates a JSON output file in a consistent format.
10
- You can find detailed information about the output format [here](https://github.com/UpstageAI/document-parse-benchmark-private?tab=readme-ov-file#dataset-format).
11
 
 
12
 
13
- ## Upstage
14
 
15
- Follow the [official Upstage DP Documentation](https://developers.upstage.ai/docs/apis/document-parse) to set up Upstage for Document Parsing.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
16
 
17
- **Note:** Ensure that the `UPSTAGE_ENDPOINT` and `UPSTAGE_API_KEY` variables are set up to run the code.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
18
 
19
- Use the script below to make an inference:
20
  ```
21
- $ python infer_upstage.py \
22
- --data_path <path to the benchmark dataset> \
23
- --save_path <path to save the .json file>
 
 
 
24
  ```
25
 
26
- ## AWS
27
- To use AWS Textract for document parsing, install AWS CLI and Boto3 for API interaction:
 
 
28
 
 
 
 
 
 
 
 
 
 
 
 
29
  ```
30
- $ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
31
- $ unzip awscliv2.zip
32
- $ sudo ./aws/install
33
- $ aws configure
34
- $ pip install boto3
35
- ```
36
- Refer to the [AWS Textract Documentation](https://docs.aws.amazon.com/en_us/textract/latest/dg/getting-started.html) for detailed instructions.
37
 
38
- **Note:** To run the AWS inference code, you need to set the following variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, and `AWS_S3_BUCKET_NAME`.
 
 
 
 
 
 
 
 
 
 
 
39
 
40
- Use the script below to make an inference:
 
 
 
41
  ```
42
- $ python infer_aws.py \
43
- --data_path <path to the benchmark dataset> \
44
- --save_path <path to save the .json file>
 
 
 
 
 
 
45
  ```
46
 
47
- ## Google
48
- Install Google Cloud SDK and Google Document AI for document parsing on Google's platform:
 
 
49
 
50
- ```
51
- $ apt-get install apt-transport-https ca-certificates gnupg curl
52
- $ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
53
- $ echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
54
- $ apt-get update && apt-get install google-cloud-cli
55
- $ gcloud init
56
- $ pip install google-cloud-documentai
 
 
 
 
57
  ```
58
 
59
- More information can be found in the [Google Document AI Documentation](https://console.cloud.google.com/ai/document-ai?hl=en)
60
 
61
- **Note:** To run the Google inference code, you need to set the following variables: `GOOGLE_PROJECT_ID`, `GOOGLE_PROCESSOR_ID`, `GOOGLE_LOCATION`, and `GOOGLE_ENDPOINT`.
 
 
 
 
 
 
62
 
63
- Use the script below to make an inference:
 
 
 
 
64
  ```
65
- $ python infer_google.py \
66
- --data_path <path to the benchmark dataset> \
67
- --save_path <path to save the .json file>
 
 
 
 
 
 
 
 
 
 
 
 
68
  ```
69
 
70
- ## LlamaParse
71
- Refer to the [official LlamaParse Documentation](https://docs.cloud.llamaindex.ai/category/API/parsing) to install and use LlamaParse for document analysis.
72
 
73
- **Note:** Ensure that the `LLAMAPARSE_API_KEY`, `LLAMAPARSE_POST_URL`, and `LLAMAPARSE_GET_URL` variables are set before running the code.
 
 
 
 
 
 
 
74
 
75
- Use the script below to make an inference:
 
 
 
 
76
  ```
77
- $ python infer_llamaparse.py \
78
- --data_path <path to the benchmark dataset> \
79
- --save_path <path to save the .json file>
 
 
 
 
 
80
  ```
81
 
82
- ## Microsoft
83
- Install the Azure AI Form Recognizer SDK:
 
 
 
 
84
  ```
85
- $ pip install azure-ai-formrecognizer==3.3.0
 
 
 
 
 
86
  ```
87
- See the [Microsoft Azure Form Recognizer Documentation](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-python) for additional details.
88
 
89
- **Note:** Set the `MICROSOFT_API_KEY` and `MICROSOFT_ENDPOINT` variables before running the code.
 
 
 
 
90
 
91
- Use the script below to make an inference:
 
 
 
 
92
  ```
93
- $ python infer_microsoft.py \
94
- --data_path <path to the benchmark dataset> \
95
- --save_path <path to save the .json file>
 
 
 
 
96
  ```
97
 
 
 
 
 
 
 
 
 
 
 
98
  ## Unstructured
99
 
100
- To handle various document formats with Unstructured, follow the steps below:
101
- ```
102
- $ pip install "unstructured-client"
 
 
 
 
103
  ```
104
- Detailed installation instructions can be found [here](https://docs.unstructured.io/api-reference/api-services/sdk-python).
105
 
106
- **Note:** To run the Unstructured inference code, you must set the `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_URL` variables.
107
 
108
- Use the script below to make an inference:
 
 
 
109
  ```
110
- $ python infer_unstructured.py \
111
- --data_path <path to the benchmark dataset> \
112
- --save_path <path to save the .json file>
 
 
 
113
  ```
114
 
115
- # Standardize Layout Class Mapping
116
- Within each `infer_*` script, a `CATEGORY_MAP` is defined to standardize the mapping of layout elements across different products.
117
- This ensures uniform evaluation by mapping the extracted document layout classes to the standardized layout categories for comparative analysis and evaluation purposes.
118
 
119
- Be sure to modify the `CATEGORY_MAP` in the inference scripts according to the document layout categories you are working with for accurate results.
120
 
121
- Below is an example of a [CATEGORY_MAP](https://github.com/UpstageAI/document-parse-benchmark-private/blob/776d9212fedb4a07671dcba666f400faf3faad4c/scripts/infer_llamaparse.py#L13) used inside LlamaParse inference script:
122
- ```
 
 
123
  CATEGORY_MAP = {
124
  "text": "paragraph",
125
  "heading": "heading1",
@@ -127,4 +275,29 @@ CATEGORY_MAP = {
127
  }
128
  ```
129
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
130
 
 
 
1
  # Document Parsing Models - Inference Guide
2
+
3
  ## Overview
 
 
4
 
5
+ The scripts in this folder allow users to extract structured data from unstructured documents using different document parsing services and libraries. Each service follows a standard installation procedure and provides an `infer_*` script to perform inference on PDF/Image samples.
 
6
 
7
+ You can choose from document parsing products such as **Upstage DP**, **AWS Textract**, **Google Document AI**, **Microsoft Azure Form Recognizer**, **LlamaParse**, or **Unstructured**. Most of these services require an API key for access. Make sure to follow specific setup instructions for each product to properly configure the environment.
 
8
 
9
+ Each service generates a JSON output file in a consistent format with `time_sec` field for performance measurement.
10
 
11
+ ---
12
 
13
+ ## Quick Start
14
+
15
+ **Run a single inference script:**
16
+ ```bash
17
+ python scripts/infer_upstage.py \
18
+ --data-path <path to dataset> \
19
+ --save-path <output.json> \
20
+ [--concurrent 4] [--sampling-rate 0.5] [--request-timeout 600]
21
+ ```
22
+
23
+ ---
24
+
25
+ ## Common CLI Arguments
26
+
27
+ All `infer_*` scripts share these arguments:
28
 
29
+ | Argument | Description | Default |
30
+ |----------|-------------|---------|
31
+ | `--data-path` | Path to documents directory | Required |
32
+ | `--save-path` | Output JSON file path | Required |
33
+ | `--input-formats` | File extensions to process | `.pdf .jpg .jpeg .png .bmp .tiff .heic` |
34
+ | `--concurrent` | Enable async mode with N concurrent requests | None (sync mode) |
35
+ | `--sampling-rate` | Fraction of files to process (0.0-1.0) | 1.0 |
36
+ | `--request-timeout` | API timeout in seconds | 600 |
37
+ | `--random-seed` | Random seed for reproducible sampling | None (random) |
38
+
39
+ ---
40
+
41
+ ## Common Features
42
+
43
+ All inference scripts share the following features:
44
+
45
+ - **Time Measurement**: Automatically measures API latency and stores `time_sec` in each result
46
+ - **Interim Results**: Saves individual API results to avoid redundant API calls on re-runs
47
+ - **Error Handling**: Continues execution even if some files fail
48
+ - **Progress Tracking**: Shows progress and completion status for each document
49
+ - **Cost Optimization**: Skips already processed files to avoid unnecessary API costs
50
+ - **Concurrency**: Optional async mode with semaphore-based rate limiting
51
+ - **Sampling**: Optional random sampling with reproducible seeds
52
+
53
+ ### How Interim Results Work
54
+
55
+ Each inference script creates an interim directory (named after the output file) where individual API results are stored:
56
 
 
57
  ```
58
+ predictions/
59
+ ├── upstage_infer.json # Final merged results
60
+ └── upstage_infer/ # Interim directory
61
+ ├── document1.pdf.json
62
+ ├── document2.pdf.json
63
+ └── document3.pdf.json
64
  ```
65
 
66
+ Benefits:
67
+ 1. **Crash Recovery**: If the script crashes, already processed files are preserved
68
+ 2. **Incremental Processing**: Re-running the script only processes new files
69
+ 3. **Cost Savings**: Avoids redundant API calls for successful results
70
 
71
+ ### Sampling and Reproducible Results
72
+
73
+ All inference scripts support random sampling of input files using the `--sampling-rate` parameter (0.0-1.0). For reproducible results across multiple runs, use the `--random-seed` parameter:
74
+
75
+ ```bash
76
+ # Sample 50% of files with reproducible selection
77
+ python scripts/infer_upstage.py \
78
+ --data-path ./documents \
79
+ --save-path results.json \
80
+ --sampling-rate 0.5 \
81
+ --random-seed 42
82
  ```
 
 
 
 
 
 
 
83
 
84
+ **Benefits:**
85
+ - **Reproducible Experiments**: Same seed + same sampling rate = identical file selection
86
+ - **Performance Testing**: Compare different services on the exact same documents
87
+ - **Cost Control**: Test on smaller datasets while maintaining representative samples
88
+
89
+ **Note**: Without `--random-seed`, sampling will be different each run (standard random behavior).
90
+
91
+ ---
92
+
93
+ ## Upstage
94
+
95
+ Follow the [official Upstage DP Documentation](https://developers.upstage.ai/docs/apis/document-parse) to set up Upstage for Document Parsing.
96
 
97
+ ### Environment Variables
98
+ ```bash
99
+ export UPSTAGE_API_KEY="your-api-key"
100
+ export UPSTAGE_ENDPOINT="https://api.upstage.ai/v1/document-ai/document-parse"
101
  ```
102
+
103
+ ### Inference
104
+ ```bash
105
+ python scripts/infer_upstage.py \
106
+ --data-path <path to dataset> \
107
+ --save-path <output.json> \
108
+ [--model-name document-parse-nightly] \
109
+ [--mode standard|enhanced] \
110
+ [--output-formats text html markdown]
111
  ```
112
 
113
+ **Service-specific arguments:**
114
+ - `--model-name`: Model version (default: `document-parse-nightly`)
115
+ - `--mode`: Parsing mode - `standard` or `enhanced`
116
+ - `--output-formats`: Output formats to request
117
 
118
+ ---
119
+
120
+ ## AWS Textract
121
+
122
+ ### Installation
123
+ ```bash
124
+ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
125
+ unzip awscliv2.zip
126
+ sudo ./aws/install
127
+ aws configure
128
+ pip install boto3
129
  ```
130
 
131
+ Refer to the [AWS Textract Documentation](https://docs.aws.amazon.com/en_us/textract/latest/dg/getting-started.html) for detailed instructions.
132
 
133
+ ### Environment Variables
134
+ ```bash
135
+ export AWS_ACCESS_KEY_ID="your-access-key"
136
+ export AWS_SECRET_ACCESS_KEY="your-secret-key"
137
+ export AWS_REGION="your-region"
138
+ export AWS_S3_BUCKET_NAME="your-bucket" # Required for PDF processing
139
+ ```
140
 
141
+ ### Inference
142
+ ```bash
143
+ python scripts/infer_aws.py \
144
+ --data-path <path to dataset> \
145
+ --save-path <output.json>
146
  ```
147
+
148
+ **Note:** PDFs use async Textract jobs (S3 upload + polling); images use direct analysis.
149
+
150
+ ---
151
+
152
+ ## Google Document AI
153
+
154
+ ### Installation
155
+ ```bash
156
+ apt-get install apt-transport-https ca-certificates gnupg curl
157
+ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
158
+ echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
159
+ apt-get update && apt-get install google-cloud-cli
160
+ gcloud init
161
+ pip install google-cloud-documentai
162
  ```
163
 
164
+ More information in the [Google Document AI Documentation](https://console.cloud.google.com/ai/document-ai).
 
165
 
166
+ ### Environment Variables
167
+ ```bash
168
+ export GOOGLE_PROJECT_ID="your-project-id"
169
+ export GOOGLE_PROCESSOR_ID="your-processor-id"
170
+ export GOOGLE_LOCATION="us"
171
+ export GOOGLE_ENDPOINT="us-documentai.googleapis.com"
172
+ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
173
+ ```
174
 
175
+ ### Inference
176
+ ```bash
177
+ python scripts/infer_google.py \
178
+ --data-path <path to dataset> \
179
+ --save-path <output.json>
180
  ```
181
+
182
+ ---
183
+
184
+ ## Microsoft Azure Document Intelligence
185
+
186
+ ### Installation
187
+ ```bash
188
+ pip install azure-ai-formrecognizer==3.3.0
189
  ```
190
 
191
+ See the [Microsoft Azure Form Recognizer Documentation](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api) for additional details.
192
+
193
+ ### Environment Variables
194
+ ```bash
195
+ export MICROSOFT_API_KEY="your-api-key"
196
+ export MICROSOFT_ENDPOINT="https://your-resource.cognitiveservices.azure.com/"
197
  ```
198
+
199
+ ### Inference
200
+ ```bash
201
+ python scripts/infer_microsoft.py \
202
+ --data-path <path to dataset> \
203
+ --save-path <output.json>
204
  ```
 
205
 
206
+ ---
207
+
208
+ ## LlamaParse
209
+
210
+ Refer to the [official LlamaParse Documentation](https://docs.cloud.llamaindex.ai/category/API/parsing) to set up LlamaParse.
211
 
212
+ ### Environment Variables
213
+ ```bash
214
+ export LLAMAPARSE_API_KEY="your-api-key"
215
+ export LLAMAPARSE_POST_URL="https://api.cloud.llamaindex.ai/api/v1/parsing/upload"
216
+ export LLAMAPARSE_GET_URL="https://api.cloud.llamaindex.ai/api/v1/parsing/job"
217
  ```
218
+
219
+ ### Inference
220
+ ```bash
221
+ python scripts/infer_llamaparse.py \
222
+ --data-path <path to dataset> \
223
+ --save-path <output.json> \
224
+ [--mode cost-effective|agentic|agentic-plus]
225
  ```
226
 
227
+ **Service-specific arguments:**
228
+ - `--mode`: Parsing mode
229
+ - `cost-effective`: Fast, standard documents (default)
230
+ - `agentic`: Balanced quality/cost
231
+ - `agentic-plus`: Highest quality
232
+
233
+ **Note:** Time measurement includes polling time for async API calls.
234
+
235
+ ---
236
+
237
  ## Unstructured
238
 
239
+ ### Installation
240
+ ```bash
241
+ pip install "unstructured[all-docs]"
242
+ pip install poppler-utils
243
+
244
+ apt install tesseract-ocr libtesseract-dev
245
+ apt install tesseract-ocr-[lang] # Use appropriate language code
246
  ```
 
247
 
248
+ Detailed installation instructions at [Unstructured Documentation](https://unstructured-io.github.io/unstructured/installing.html). Use [Tesseract Language Codes](https://tesseract-ocr.github.io/tessdoc/Data-Files-in-different-versions.html) for OCR support in different languages.
249
 
250
+ ### Environment Variables
251
+ ```bash
252
+ export UNSTRUCTURED_API_KEY="your-api-key"
253
+ export UNSTRUCTURED_URL="https://api.unstructured.io/general/v0/general"
254
  ```
255
+
256
+ ### Inference
257
+ ```bash
258
+ python scripts/infer_unstructured.py \
259
+ --data-path <path to dataset> \
260
+ --save-path <output.json>
261
  ```
262
 
263
+ ---
 
 
264
 
265
+ ## Category Mapping
266
 
267
+ Within each `infer_*` script, a `CATEGORY_MAP` is defined to standardize the mapping of layout elements across different products. This ensures uniform evaluation by mapping the extracted document layout classes to standardized categories.
268
+
269
+ Example from LlamaParse:
270
+ ```python
271
  CATEGORY_MAP = {
272
  "text": "paragraph",
273
  "heading": "heading1",
 
275
  }
276
  ```
277
 
278
+ Modify the `CATEGORY_MAP` in inference scripts according to your document layout categories for accurate results.
279
+
280
+ ---
281
+
282
+ ## Utils Module
283
+
284
+ The `utils.py` module provides shared functionality:
285
+
286
+ - `read_file_paths()` - Find files with supported formats
287
+ - `validate_json_save_path()` - Validate output file path
288
+ - `load_json_file()` - Safely load existing JSON results
289
+ - `get_interim_dir_path()` - Get interim directory path
290
+ - `save_interim_result()` - Save individual API result
291
+ - `load_interim_result()` - Load existing interim result
292
+ - `collect_all_interim_results()` - Merge all interim results
293
+
294
+ ---
295
+
296
+ ## Base Classes (for developers)
297
+
298
+ The `base.py` module provides inheritance hierarchy:
299
+
300
+ - **`BaseInference`**: Core class with sync/async orchestration, interim result handling, performance metrics
301
+ - **`HttpClientInference`**: For HTTP-based APIs (Upstage, LlamaParse) - manages `httpx.AsyncClient`
302
 
303
+ Use `create_argument_parser()` from `base.py` to get standard CLI arguments when creating new inference scripts.
scripts/base.py ADDED
@@ -0,0 +1,899 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Base class and utilities for inference scripts.
3
+
4
+ This module provides:
5
+ - BaseInference: A base class that captures common inference patterns
6
+ - HttpClientInference: Base class for HTTP-based APIs (uses httpx)
7
+ - print_performance_metrics: Utility for printing performance metrics
8
+ - process_files_concurrently: Utility for concurrent file processing
9
+ - create_argument_parser: Common CLI argument parser
10
+ """
11
+ import os
12
+ import json
13
+ import asyncio
14
+ import time
15
+ import random
16
+ import argparse
17
+ from abc import ABC, abstractmethod
18
+ from pathlib import Path
19
+ from typing import List, Tuple, Callable, Any, Optional, Dict
20
+ from enum import Enum
21
+
22
+ from utils import (
23
+ load_json_file,
24
+ read_file_paths,
25
+ validate_json_save_path,
26
+ get_interim_dir_path,
27
+ save_interim_result,
28
+ load_interim_result,
29
+ collect_all_interim_results
30
+ )
31
+
32
+ from doc_grouping import (
33
+ group_pages_to_documents,
34
+ parse_ext_mapping,
35
+ is_multi_page_dataset,
36
+ )
37
+
38
+
39
+ class ErrorType(Enum):
40
+ """Error categories for better error tracking."""
41
+ TIMEOUT = "timeout"
42
+ API_ERROR = "api_error"
43
+ NETWORK_ERROR = "network_error"
44
+ VALIDATION_ERROR = "validation_error"
45
+ UNKNOWN_ERROR = "unknown_error"
46
+
47
+
48
+ def create_argument_parser(description: str = "Document inference script") -> argparse.ArgumentParser:
49
+ """Create a common argument parser with standard arguments.
50
+
51
+ Args:
52
+ description: Description for the argument parser
53
+
54
+ Returns:
55
+ argparse.ArgumentParser with common arguments configured
56
+ """
57
+ parser = argparse.ArgumentParser(description=description)
58
+ parser.add_argument(
59
+ "--data-path",
60
+ type=str, default="", required=True,
61
+ help="Path containing the documents to process"
62
+ )
63
+ parser.add_argument(
64
+ "--save-path",
65
+ type=str, default="", required=True,
66
+ help="Path to save the results"
67
+ )
68
+ parser.add_argument(
69
+ "--input-formats",
70
+ type=str, nargs='+',
71
+ default=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"],
72
+ help="Supported input file formats"
73
+ )
74
+ parser.add_argument(
75
+ "--concurrent",
76
+ type=int, default=None,
77
+ help="Number of concurrent API requests (enables concurrent mode if specified)"
78
+ )
79
+ parser.add_argument(
80
+ "--sampling-rate",
81
+ type=float, default=1.0,
82
+ help="Fraction of files to process (0.0-1.0, default 1.0 = all files)"
83
+ )
84
+ parser.add_argument(
85
+ "--request-timeout",
86
+ type=float, default=600,
87
+ help="Timeout in seconds for API requests (default 600)"
88
+ )
89
+ parser.add_argument(
90
+ "--random-seed",
91
+ type=int, default=None,
92
+ help="Random seed for reproducible sampling (default None = random)"
93
+ )
94
+ parser.add_argument(
95
+ "--model",
96
+ type=str, default=None,
97
+ help="Model name to use for inference (default depends on provider)"
98
+ )
99
+ parser.add_argument(
100
+ "--mode",
101
+ type=str, default=None,
102
+ help="Inference mode (e.g., 'standard', 'enhanced', 'agentic'). None if not applicable."
103
+ )
104
+ parser.add_argument(
105
+ "--group-by-document",
106
+ action=argparse.BooleanOptionalAction,
107
+ default=False,
108
+ help="Group per-page results into document-level entries (default: False)"
109
+ )
110
+ parser.add_argument(
111
+ "--file-ext-mapping",
112
+ type=str,
113
+ default=None,
114
+ help="File extension mapping for document grouping, e.g., 'jpg:pdf' or 'jpg->pdf,png->pdf'"
115
+ )
116
+ return parser
117
+
118
+
119
+ def parse_args_with_extra(parser: argparse.ArgumentParser) -> argparse.Namespace:
120
+ """Parse arguments, gracefully ignoring unrecognized ones.
121
+
122
+ This allows extra CLI arguments (e.g. --dpi, --jpeg-quality) to be passed
123
+ down from run_all.py / infer_all.py without breaking scripts that don't
124
+ understand them. Unrecognized arguments are logged to stderr and silently
125
+ discarded.
126
+
127
+ Args:
128
+ parser: ArgumentParser to use for parsing
129
+
130
+ Returns:
131
+ argparse.Namespace with recognized arguments
132
+ """
133
+ args, unknown = parser.parse_known_args()
134
+ if unknown:
135
+ import sys as _sys
136
+ print(f"[INFO] Ignoring unrecognized arguments: {unknown}", file=_sys.stderr)
137
+ return args
138
+
139
+
140
+ def print_performance_metrics(
141
+ sample_latencies: List[float],
142
+ total_elapsed_time: float,
143
+ concurrent_limit: Optional[int] = None,
144
+ num_total: Optional[int] = None,
145
+ num_errors: int = 0
146
+ ):
147
+ """Print performance metrics for concurrent processing.
148
+
149
+ Args:
150
+ sample_latencies: List of latencies for each successful sample
151
+ total_elapsed_time: Total time elapsed for all processing
152
+ concurrent_limit: Optional concurrent limit (for display)
153
+ num_total: Optional total number of samples
154
+ num_errors: Number of failed samples
155
+ """
156
+ num_successful = len(sample_latencies)
157
+ total_samples = num_total if num_total is not None else (num_successful + num_errors)
158
+ success_rate = (num_successful / total_samples * 100) if total_samples > 0 else 0
159
+
160
+ print("="*60)
161
+ print("PERFORMANCE METRICS")
162
+ if concurrent_limit is not None:
163
+ print(f"Concurrent Limit: {concurrent_limit}")
164
+
165
+ print(f"\nSuccess Rate: {success_rate:.2f}% ({num_successful}/{total_samples})")
166
+
167
+ if num_successful > 0:
168
+ # Latency metrics (sec/sample)
169
+ avg_latency = sum(sample_latencies) / num_successful
170
+ min_latency = min(sample_latencies)
171
+ max_latency = max(sample_latencies)
172
+
173
+ print(f"\nLatency (sec/sample):")
174
+ print(f" - Average: {avg_latency:.2f} sec/sample")
175
+ print(f" - Min: {min_latency:.2f} sec/sample")
176
+ print(f" - Max: {max_latency:.2f} sec/sample")
177
+
178
+ # Throughput metrics (sample/min)
179
+ throughput_per_min = (num_successful / total_elapsed_time) * 60
180
+
181
+ print(f"\nThroughput:")
182
+ print(f" - {throughput_per_min:.2f} samples/min")
183
+
184
+ print(f"\nTotal Processing Time: {total_elapsed_time:.2f} seconds")
185
+ print("="*60)
186
+
187
+
188
+ def categorize_error(error: Exception) -> ErrorType:
189
+ """Categorize an exception into error types.
190
+
191
+ Args:
192
+ error: Exception to categorize
193
+
194
+ Returns:
195
+ ErrorType enum value
196
+ """
197
+ error_str = str(error).lower()
198
+ error_type = type(error).__name__
199
+
200
+ if isinstance(error, asyncio.TimeoutError) or "timeout" in error_str:
201
+ return ErrorType.TIMEOUT
202
+ elif isinstance(error, (ConnectionError, OSError)) or "connection" in error_str or "network" in error_str:
203
+ return ErrorType.NETWORK_ERROR
204
+ elif "api" in error_str or "http" in error_str or "status" in error_str:
205
+ return ErrorType.API_ERROR
206
+ elif "validation" in error_str or "invalid" in error_str:
207
+ return ErrorType.VALIDATION_ERROR
208
+ else:
209
+ return ErrorType.UNKNOWN_ERROR
210
+
211
+
212
+ async def process_files_concurrently(
213
+ paths: List,
214
+ process_single_file_fn: Callable,
215
+ concurrent_limit: int = 4,
216
+ processed_data: Optional[dict] = None,
217
+ *args, **kwargs
218
+ ) -> Tuple[dict, List[str], List[float], dict]:
219
+ """Process multiple files concurrently with semaphore-based rate limiting.
220
+
221
+ Args:
222
+ paths: List of file paths to process
223
+ process_single_file_fn: Async function that processes a single file
224
+ Should accept (filepath, file_index, total_files, *args, **kwargs) and return
225
+ (filename, result, latency) or (filename, None, 0) on error
226
+ concurrent_limit: Maximum number of concurrent operations
227
+ processed_data: Optional dict of already processed data (to skip)
228
+ *args, **kwargs: Additional arguments to pass to process_single_file_fn
229
+
230
+ Returns:
231
+ Tuple of (result_dict, error_files, sample_latencies, error_details)
232
+ error_details: dict mapping filename to (error_type, error_message)
233
+ """
234
+ if processed_data is None:
235
+ processed_data = {}
236
+
237
+ error_files = []
238
+ error_details = {} # filename -> (ErrorType, error_message)
239
+ sample_latencies = []
240
+ result_dict = {}
241
+
242
+ # Create tasks for all files
243
+ tasks = []
244
+ for idx, filepath in enumerate(paths, 1):
245
+ task = process_single_file_fn(filepath, idx, len(paths), *args, **kwargs)
246
+ tasks.append(task)
247
+
248
+ # Process all files concurrently (with semaphore limiting concurrency)
249
+ # Handle KeyboardInterrupt gracefully
250
+ interrupted = False
251
+ try:
252
+ results = await asyncio.gather(*tasks, return_exceptions=True)
253
+ except KeyboardInterrupt:
254
+ interrupted = True
255
+ print("\n⚠️ KeyboardInterrupt detected! Collecting completed results...")
256
+
257
+ # Cancel remaining tasks
258
+ for task in tasks:
259
+ if not task.done():
260
+ task.cancel()
261
+
262
+ # Wait a moment for tasks to finish cancelling
263
+ await asyncio.sleep(0.1)
264
+
265
+ # Collect completed results
266
+ results = []
267
+ for task in tasks:
268
+ if task.done() and not task.cancelled():
269
+ try:
270
+ results.append(task.result())
271
+ except Exception as e:
272
+ results.append(e)
273
+
274
+ print(f"✓ Collected {len([r for r in results if not isinstance(r, Exception)])} completed results out of {len(paths)} total files")
275
+
276
+ # Collect results
277
+ for i, result in enumerate(results):
278
+ if isinstance(result, Exception):
279
+ error_type = categorize_error(result)
280
+ error_msg = str(result)
281
+ # Try to get filename from the corresponding filepath
282
+ if i < len(paths):
283
+ filename = paths[i].name
284
+ error_files.append(filename)
285
+ error_details[filename] = (error_type, error_msg)
286
+ print(f"Error in task ({error_type.value}): {error_msg}")
287
+ continue
288
+
289
+ filename, result_data, latency = result
290
+
291
+ if result_data is not None:
292
+ result_dict[filename] = result_data
293
+ if latency > 0:
294
+ sample_latencies.append(latency)
295
+ elif latency == 0 and filename not in processed_data:
296
+ error_files.append(filename)
297
+ # If we get here, it means process_single_file returned (filename, None, 0)
298
+ # but didn't raise an exception, so it's a skipped file (already processed)
299
+
300
+ if interrupted:
301
+ print("⚠️ Processing was interrupted. Partial results collected.")
302
+
303
+ return result_dict, error_files, sample_latencies, error_details
304
+
305
+
306
+ class BaseInference(ABC):
307
+ """Base class for all inference implementations.
308
+
309
+ This class provides common functionality for:
310
+ - Initialization (save_path, interim_dir, processed_data)
311
+ - Concurrent mode setup (semaphore)
312
+ - Result collection and saving (_collect_and_save_results)
313
+ - Async inference orchestration (infer_async)
314
+ - Sync/async mode dispatching (infer)
315
+
316
+ Subclasses must implement:
317
+ - post_process(): Process raw API results into standard format
318
+ - _call_api_async(): Make async API call for a file
319
+ - _call_api_sync(): Make sync API call for a file
320
+ """
321
+
322
+ # Default input formats - can be overridden by subclasses
323
+ DEFAULT_INPUT_FORMATS = [".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"]
324
+
325
+ def __init__(
326
+ self,
327
+ save_path,
328
+ input_formats=None,
329
+ concurrent_limit=None,
330
+ sampling_rate=1.0,
331
+ request_timeout=600,
332
+ random_seed=None,
333
+ group_by_document=False,
334
+ file_ext_mapping=None
335
+ ):
336
+ """Initialize the base inference class
337
+
338
+ Args:
339
+ save_path (str): the json path to save the results
340
+ input_formats (list, optional): the supported file formats.
341
+ concurrent_limit (int, optional): maximum number of concurrent API requests (enables concurrent mode)
342
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0, default 1.0 = all files)
343
+ request_timeout (float, optional): timeout in seconds for API requests (default 600)
344
+ random_seed (int, optional): random seed for reproducible sampling (default None = random)
345
+ group_by_document (bool, optional): group per-page results into document-level entries (default False)
346
+ file_ext_mapping (str or dict, optional): file extension mapping for document grouping
347
+ """
348
+ if input_formats is None:
349
+ input_formats = self.DEFAULT_INPUT_FORMATS
350
+
351
+ self.formats = input_formats
352
+ self.concurrent_limit = concurrent_limit
353
+ self.sampling_rate = max(0.0, min(1.0, sampling_rate)) # Clamp between 0 and 1
354
+ self.request_timeout = request_timeout
355
+ self.random_seed = random_seed
356
+
357
+ # Document grouping settings
358
+ self.group_by_document = group_by_document
359
+ if isinstance(file_ext_mapping, str):
360
+ self.file_ext_mapping = parse_ext_mapping(file_ext_mapping) if file_ext_mapping else {}
361
+ else:
362
+ self.file_ext_mapping = file_ext_mapping or {}
363
+
364
+ # Setup save_path and interim directory (used by both modes)
365
+ validate_json_save_path(save_path)
366
+ self.save_path = save_path
367
+ self.interim_dir = get_interim_dir_path(save_path)
368
+ os.makedirs(self.interim_dir, exist_ok=True)
369
+ self.processed_data = load_json_file(save_path)
370
+
371
+ # Concurrent mode setup
372
+ if concurrent_limit is not None:
373
+ self.semaphore = asyncio.Semaphore(concurrent_limit)
374
+
375
+ @abstractmethod
376
+ def post_process(self, data: Dict) -> Dict:
377
+ """Post-process the raw API response to match the standard format.
378
+
379
+ This method must be implemented by subclasses to convert API-specific
380
+ response formats into the standard format.
381
+
382
+ Args:
383
+ data (dict): raw API response data, keyed by filename
384
+
385
+ Returns:
386
+ dict: processed data in standard format, keyed by filename
387
+ """
388
+ pass
389
+
390
+ def _merge_processed_data(self, processed_dict: Dict) -> Dict:
391
+ """Merge previously processed data into the result dict.
392
+
393
+ This is a common operation at the end of post_process().
394
+ Subclasses should call this at the end of their post_process implementation.
395
+
396
+ Args:
397
+ processed_dict: The dict of newly processed results
398
+
399
+ Returns:
400
+ The merged dict including previously processed data
401
+ """
402
+ for key in self.processed_data:
403
+ if key not in processed_dict:
404
+ processed_dict[key] = self.processed_data[key]
405
+ return processed_dict
406
+
407
+ @abstractmethod
408
+ async def _call_api_async(self, filepath, *args, **kwargs):
409
+ """Make the actual async API call for a file.
410
+
411
+ This method must be implemented by subclasses to perform the actual API call.
412
+ It should NOT handle interim result checking or saving - that's done by the base class.
413
+
414
+ Args:
415
+ filepath: Path object to the file
416
+ *args, **kwargs: Additional arguments (e.g., client for HTTP requests)
417
+
418
+ Returns:
419
+ The raw API response data (will be wrapped by base class)
420
+
421
+ Raises:
422
+ Exception: If API call fails
423
+ """
424
+ pass
425
+
426
+ async def process_single_file(self, filepath, file_index, total_files, *args, **kwargs):
427
+ """Process a single file asynchronously (for concurrent mode).
428
+
429
+ This wrapper method handles:
430
+ - Checking if already processed
431
+ - Checking interim results
432
+ - Semaphore (if concurrent_limit is set)
433
+ - Timing
434
+ - File size tracking
435
+ - Saving interim results
436
+ - Error handling and categorization
437
+
438
+ Subclasses only need to implement _call_api_async().
439
+
440
+ Args:
441
+ filepath: Path object to the file
442
+ file_index: Current file index (for logging)
443
+ total_files: Total number of files
444
+ *args, **kwargs: Additional arguments passed to _call_api_async
445
+
446
+ Returns:
447
+ tuple: (filename, result_data, latency) or (filename, None, 0) on error
448
+ """
449
+ filename = filepath.name
450
+ file_size_mb = filepath.stat().st_size / (1024 * 1024)
451
+ print(f"({file_index}/{total_files}) Processing {filepath} ({file_size_mb:.2f} MB)")
452
+
453
+ # Check if already processed (in memory)
454
+ if filename in self.processed_data.keys():
455
+ print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
456
+ return (filename, None, 0)
457
+
458
+ # Check if interim result exists (on disk)
459
+ existing_result = load_interim_result(self.interim_dir, filename)
460
+ if existing_result is not None:
461
+ print(f"'{filename}' interim result already exists. Skipping API call to save costs.")
462
+ return (filename, None, 0)
463
+
464
+ try:
465
+ # Use semaphore if concurrent_limit is set (for file-level concurrency control)
466
+ # Note: Some subclasses may use semaphore inside _call_api_async for page-level control
467
+ if self.concurrent_limit is not None and hasattr(self, 'semaphore'):
468
+ async with self.semaphore:
469
+ # Start timing AFTER acquiring semaphore (don't include wait time)
470
+ sample_start_time = time.time()
471
+ result_data = await asyncio.wait_for(
472
+ self._call_api_async(filepath, *args, **kwargs),
473
+ timeout=self.request_timeout
474
+ )
475
+ else:
476
+ # Start timing right before API calls
477
+ sample_start_time = time.time()
478
+ result_data = await asyncio.wait_for(
479
+ self._call_api_async(filepath, *args, **kwargs),
480
+ timeout=self.request_timeout
481
+ )
482
+
483
+ sample_latency = time.time() - sample_start_time
484
+
485
+ # Save interim result with file size
486
+ result_with_time = {
487
+ "data": result_data,
488
+ "time_sec": sample_latency,
489
+ "file_size_mb": round(file_size_mb, 2)
490
+ }
491
+ save_interim_result(self.interim_dir, filename, result_with_time)
492
+ pct = file_index / total_files * 100
493
+ print(f"✓ ({file_index}/{total_files}, {pct:.1f}%) Saved '{filename}' (took {sample_latency:.2f}s)")
494
+
495
+ # Return result_with_time (not result_data) to preserve time_sec/file_size_mb
496
+ return (filename, result_with_time, sample_latency)
497
+
498
+ except asyncio.TimeoutError:
499
+ error_type = ErrorType.TIMEOUT
500
+ error_msg = f"Request timeout after {self.request_timeout}s"
501
+ print(f"✗ {filename} - {error_type.value}: {error_msg}")
502
+ # Raise exception so it can be caught and categorized in process_files_concurrently
503
+ raise asyncio.TimeoutError(error_msg)
504
+ except Exception as e:
505
+ error_type = categorize_error(e)
506
+ error_msg = str(e)
507
+ print(f"✗ {filename} - {error_type.value}: {error_msg}")
508
+ # Re-raise so it can be caught and categorized in process_files_concurrently
509
+ raise
510
+
511
+ @abstractmethod
512
+ def _call_api_sync(self, filepath, *args, **kwargs):
513
+ """Make the actual sync API call for a file.
514
+
515
+ This method must be implemented by subclasses to perform the actual API call.
516
+ It should NOT handle interim result checking or saving - that's done by the base class.
517
+
518
+ Args:
519
+ filepath: Path object to the file
520
+ *args, **kwargs: Additional arguments
521
+
522
+ Returns:
523
+ The raw API response data (will be wrapped by base class)
524
+
525
+ Raises:
526
+ Exception: If API call fails
527
+ """
528
+ pass
529
+
530
+ def process_file_sequential(self, filepath, file_index, total_files, *args, **kwargs):
531
+ """Process a single file sequentially (for sync mode).
532
+
533
+ This wrapper method handles:
534
+ - Checking if already processed
535
+ - Checking interim results
536
+ - Timing
537
+ - File size tracking
538
+ - Saving interim results
539
+ - Error handling and categorization
540
+
541
+ Subclasses only need to implement _call_api_sync().
542
+
543
+ Args:
544
+ filepath: Path object to the file
545
+ file_index: Current file index (for logging)
546
+ total_files: Total number of files
547
+ *args, **kwargs: Additional arguments passed to _call_api_sync
548
+
549
+ Returns:
550
+ tuple: (filename, result_data, latency) or (filename, None, 0) on error
551
+ """
552
+ filename = filepath.name
553
+ file_size_mb = filepath.stat().st_size / (1024 * 1024)
554
+ sample_start_time = time.time()
555
+
556
+ try:
557
+ # Make the actual API call (implemented by subclass)
558
+ # Note: For sync calls, timeout handling should be done in the subclass
559
+ # or we could use signal.alarm on Unix, but that's complex
560
+ result_data = self._call_api_sync(filepath, *args, **kwargs)
561
+
562
+ sample_latency = time.time() - sample_start_time
563
+
564
+ # Save interim result with file size
565
+ result_with_time = {
566
+ "data": result_data,
567
+ "time_sec": sample_latency,
568
+ "file_size_mb": round(file_size_mb, 2)
569
+ }
570
+ save_interim_result(self.interim_dir, filename, result_with_time)
571
+ pct = file_index / total_files * 100
572
+ print(f"✓ ({file_index}/{total_files}, {pct:.1f}%) Saved '{filename}' (took {sample_latency:.2f}s)")
573
+
574
+ # Return result_with_time (not result_data) to preserve time_sec/file_size_mb
575
+ return (filename, result_with_time, sample_latency)
576
+
577
+ except Exception as e:
578
+ error_type = categorize_error(e)
579
+ error_msg = str(e)
580
+ print(f"✗ {filename} - {error_type.value}: {error_msg}")
581
+ # Re-raise so it can be caught and categorized in _infer_sequential
582
+ raise
583
+
584
+ def _collect_and_save_results(self, raw_results, sample_latencies, total_elapsed_time, error_files, error_details=None):
585
+ """Common method to collect interim results, post-process, and save final results.
586
+
587
+ Used by both sync and async modes. This method:
588
+ 1. Collects all interim results from disk
589
+ 2. Merges with current run results
590
+ 3. Unwraps data from interim result format
591
+ 4. Post-processes results
592
+ 5. Preserves timing information
593
+ 6. Saves final results
594
+ 7. Prints performance metrics
595
+
596
+ Args:
597
+ raw_results (dict): Results from current run, keyed by filename
598
+ sample_latencies (list): List of latencies for successful samples
599
+ total_elapsed_time (float): Total time elapsed for processing
600
+ error_files (list): List of filenames that had errors
601
+ error_details (dict, optional): Dict mapping filename to (ErrorType, error_message)
602
+
603
+ Returns:
604
+ dict: Final processed results
605
+ """
606
+ if error_details is None:
607
+ error_details = {}
608
+
609
+ # Collect all interim results
610
+ print("Collecting all interim results...")
611
+ collected_results = collect_all_interim_results(self.interim_dir)
612
+
613
+ # Merge with raw_results (from current run)
614
+ for key, value in raw_results.items():
615
+ collected_results[key] = value
616
+
617
+ raw_results = collected_results
618
+
619
+ # Unwrap the data from interim results (extract 'data' field)
620
+ unwrapped_results = {}
621
+ for key, value in raw_results.items():
622
+ if isinstance(value, dict) and "data" in value:
623
+ unwrapped_results[key] = value["data"]
624
+ else:
625
+ unwrapped_results[key] = value
626
+
627
+ # Post-process results
628
+ final_results = self.post_process(unwrapped_results)
629
+
630
+ # Preserve time_sec and file_size_mb from raw_results to final_results
631
+ for key in final_results:
632
+ if key in raw_results:
633
+ raw_result = raw_results[key]
634
+ if isinstance(raw_result, dict):
635
+ if "time_sec" in raw_result and isinstance(final_results[key], dict):
636
+ final_results[key]["time_sec"] = raw_result["time_sec"]
637
+ if "file_size_mb" in raw_result and isinstance(final_results[key], dict):
638
+ final_results[key]["file_size_mb"] = raw_result["file_size_mb"]
639
+
640
+ # Apply document-level grouping if enabled
641
+ if self.group_by_document:
642
+ # Filter out metadata keys before grouping
643
+ data_keys = [k for k in final_results.keys() if not k.startswith("_")]
644
+ if data_keys and is_multi_page_dataset(data_keys):
645
+ print("Grouping per-page results into document-level entries...")
646
+
647
+ # Separate metadata and data
648
+ metadata_keys = [k for k in final_results.keys() if k.startswith("_")]
649
+ metadata = {k: final_results[k] for k in metadata_keys}
650
+ page_data = {k: final_results[k] for k in data_keys}
651
+
652
+ # Group pages into documents
653
+ grouped_results = group_pages_to_documents(
654
+ page_data,
655
+ file_ext_mapping=self.file_ext_mapping,
656
+ elements_key="elements",
657
+ include_merged_tables=True,
658
+ )
659
+
660
+ # Restore metadata
661
+ grouped_results.update(metadata)
662
+ final_results = grouped_results
663
+
664
+ print(f"Grouped {len(data_keys)} pages into {len(grouped_results) - len(metadata_keys)} documents")
665
+
666
+ # Calculate performance metrics from ALL results (including interim results)
667
+ # Extract time_sec from all collected raw_results to properly calculate metrics
668
+ # This is the authoritative source since raw_results contains both interim AND current run
669
+ all_latencies = []
670
+ for key, value in raw_results.items():
671
+ if isinstance(value, dict) and "time_sec" in value:
672
+ all_latencies.append(value["time_sec"])
673
+
674
+ # Always prefer all_latencies since it's extracted from merged results (interim + current)
675
+ # This ensures metadata reflects ALL processed files, not just the current run
676
+ # Handles: full resume (all interim), partial resume, and fresh runs
677
+ if all_latencies:
678
+ sample_latencies = all_latencies
679
+
680
+ num_successful = len(sample_latencies)
681
+ num_total = num_successful + len(error_files)
682
+
683
+ # Estimate total_elapsed_time from latencies if current run processed nothing
684
+ # For concurrent processing: wall-clock time ≈ sum(latencies) / concurrent_limit
685
+ if total_elapsed_time < 1.0 and sample_latencies:
686
+ sum_latencies = sum(sample_latencies)
687
+ concurrent_limit = self.concurrent_limit or 1
688
+ total_elapsed_time = sum_latencies / concurrent_limit
689
+
690
+ # Add metadata for reproducible throughput calculation in evaluation
691
+ final_results["_metadata"] = {
692
+ "total_elapsed_time_sec": round(total_elapsed_time, 4),
693
+ "concurrent_limit": self.concurrent_limit,
694
+ "num_files": num_total,
695
+ "num_successful": num_successful,
696
+ "num_errors": len(error_files)
697
+ }
698
+
699
+ # Save final results
700
+ with open(self.save_path, "w", encoding="utf-8") as f:
701
+ json.dump(final_results, f, ensure_ascii=False, indent=4)
702
+
703
+ print_performance_metrics(
704
+ sample_latencies,
705
+ total_elapsed_time,
706
+ self.concurrent_limit,
707
+ num_total,
708
+ len(error_files)
709
+ )
710
+
711
+ # Print error summary with categorization
712
+ if error_files:
713
+ print(f"\nErrors ({len(error_files)} files):")
714
+ error_by_type = {}
715
+ for error_file in error_files:
716
+ if error_file in error_details:
717
+ error_type, error_msg = error_details[error_file]
718
+ if error_type not in error_by_type:
719
+ error_by_type[error_type] = []
720
+ error_by_type[error_type].append((error_file, error_msg))
721
+ else:
722
+ if ErrorType.UNKNOWN_ERROR not in error_by_type:
723
+ error_by_type[ErrorType.UNKNOWN_ERROR] = []
724
+ error_by_type[ErrorType.UNKNOWN_ERROR].append((error_file, "Unknown error"))
725
+
726
+ for error_type, errors in error_by_type.items():
727
+ print(f" {error_type.value.upper()} ({len(errors)} files):")
728
+ for error_file, error_msg in errors[:5]: # Show first 5
729
+ print(f" - {error_file}: {error_msg}")
730
+ if len(errors) > 5:
731
+ print(f" ... and {len(errors) - 5} more")
732
+
733
+ print("Finished processing all documents")
734
+ print("Results saved to: {}".format(self.save_path))
735
+ print("Interim results saved to: {}".format(self.interim_dir))
736
+ print("Number of errors: {}".format(len(error_files)))
737
+ print("Total processed files: {}".format(len([k for k in final_results if not k.startswith("_")])))
738
+
739
+ return final_results
740
+
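Because `_collect_and_save_results` writes the `_metadata` block into the final JSON, throughput can be recomputed later without rerunning inference. A rough sketch of that downstream use (the output path is only an example; use whatever save path was passed to the inference script):

```
import json

with open("predictions/upstage.json", encoding="utf-8") as f:
    results = json.load(f)

meta = results["_metadata"]
if meta["total_elapsed_time_sec"] > 0:
    throughput = meta["num_successful"] / meta["total_elapsed_time_sec"]
    print(f"{throughput:.2f} docs/sec "
          f"({meta['num_successful']}/{meta['num_files']} files, "
          f"concurrent_limit={meta['concurrent_limit']})")
```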
741
+ async def infer_async(self, file_path, *args, **kwargs):
742
+ """Infer the layout of documents with concurrent processing.
743
+
744
+ This method orchestrates concurrent file processing using process_files_concurrently.
745
+ It can be overridden by subclasses if they need custom async behavior.
746
+
747
+ Args:
748
+ file_path (str): the path to the file or directory containing the documents to process
749
+ *args, **kwargs: Additional arguments to pass to process_single_file
750
+
751
+ Returns:
752
+ dict: Final processed results
753
+ """
754
+ paths = read_file_paths(file_path, supported_formats=self.formats)
755
+
756
+ # Apply sampling rate if less than 1.0
757
+ if self.sampling_rate < 1.0 and len(paths) > 0:
758
+ if self.random_seed is not None:
759
+ random.seed(self.random_seed)
760
+ sample_size = max(1, int(len(paths) * self.sampling_rate))
761
+ paths = random.sample(paths, sample_size)
762
+ print(f"Sampling {self.sampling_rate * 100:.1f}% of files: {len(paths)} out of {len(read_file_paths(file_path, supported_formats=self.formats))} total files")
763
+
764
+ error_files = []
765
+ sample_latencies = []
766
+ total_start_time = time.time()
767
+
768
+ # Process files concurrently
769
+ result_dict, error_files, sample_latencies, error_details = await process_files_concurrently(
770
+ paths,
771
+ self.process_single_file,
772
+ self.concurrent_limit,
773
+ self.processed_data,
774
+ *args,
775
+ **kwargs
776
+ )
777
+
778
+ # Collect, post-process, and save results (same as sequential mode)
779
+ total_elapsed_time = time.time() - total_start_time
780
+ final_results = self._collect_and_save_results(
781
+ result_dict, sample_latencies, total_elapsed_time, error_files, error_details
782
+ )
783
+
784
+ return final_results
785
+
786
+ def infer(self, file_path, *args, **kwargs):
787
+ """Infer the layout of the documents in the given file path.
788
+
789
+ This method dispatches to async mode if concurrent_limit is set,
790
+ otherwise runs sequential processing.
791
+
792
+ Args:
793
+ file_path (str): the path to the file or directory containing the documents to process
794
+ *args, **kwargs: Additional arguments (passed to infer_async or process_file_sequential)
795
+
796
+ Returns:
797
+ dict: Final processed results (or None for sequential mode without return)
798
+ """
799
+ # Use concurrent mode if concurrent_limit is set
800
+ if self.concurrent_limit is not None:
801
+ return asyncio.run(self.infer_async(file_path, *args, **kwargs))
802
+
803
+ # Sequential mode - delegate to subclass implementation
804
+ return self._infer_sequential(file_path, *args, **kwargs)
805
+
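In practice, the dispatch above means the same subclass runs concurrently or sequentially depending only on the constructor. A small hedged usage sketch (using the `AWSInference` subclass defined later in this PR; paths are placeholders):

```
from infer_aws import AWSInference

# concurrent_limit set -> infer() dispatches to asyncio-based concurrent processing
AWSInference("predictions/aws.json", concurrent_limit=8).infer("dataset/images")

# concurrent_limit omitted (None) -> infer() falls back to _infer_sequential
AWSInference("predictions/aws.json").infer("dataset/images")
```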
806
+ def _infer_sequential(self, file_path, *args, **kwargs):
807
+ """Internal method for sequential inference.
808
+
809
+ This template method can be overridden by subclasses for custom sequential behavior.
810
+ Default implementation processes files one by one using process_file_sequential.
811
+
812
+ Args:
813
+ file_path (str): the path to the file or directory containing the documents to process
814
+ *args, **kwargs: Additional arguments passed to process_file_sequential
815
+ """
816
+ paths = read_file_paths(file_path, supported_formats=self.formats)
817
+
818
+ # Apply sampling rate if less than 1.0
819
+ if self.sampling_rate < 1.0 and len(paths) > 0:
820
+ if self.random_seed is not None:
821
+ random.seed(self.random_seed)
822
+ sample_size = max(1, int(len(paths) * self.sampling_rate))
823
+ paths = random.sample(paths, sample_size)
824
+ print(f"Sampling {self.sampling_rate * 100:.1f}% of files: {len(paths)} out of {len(read_file_paths(file_path, supported_formats=self.formats))} total files")
825
+
826
+ error_files = []
827
+ error_details = {}
828
+ sample_latencies = []
829
+ total_start_time = time.time()
830
+
831
+ try:
832
+ for idx, filepath in enumerate(paths, 1):
833
+ filename = filepath.name
834
+
835
+ # Check if interim result already exists
836
+ existing_result = load_interim_result(self.interim_dir, filename)
837
+ if existing_result is not None:
838
+ print(f"'{filename}' interim result already exists. Skipping API call to save costs.")
839
+ continue
840
+
841
+ # Process the file
842
+ try:
843
+ filename_result, result_data, latency = self.process_file_sequential(
844
+ filepath, idx, len(paths), *args, **kwargs
845
+ )
846
+
847
+ if result_data is not None and latency > 0:
848
+ sample_latencies.append(latency)
849
+ elif latency == 0:
850
+ error_files.append(filename)
851
+ except KeyboardInterrupt:
852
+ raise # Re-raise to be caught by outer handler
853
+ except Exception as e:
854
+ error_type = categorize_error(e)
855
+ error_details[filename] = (error_type, str(e))
856
+ error_files.append(filename)
857
+ continue
858
+ except KeyboardInterrupt:
859
+ print("\n⚠️ KeyboardInterrupt detected! Saving partial results...")
860
+ print(f"✓ Processed {len(sample_latencies)} files before interruption")
861
+
862
+ # Collect, post-process, and save results (same as async mode)
863
+ total_elapsed_time = time.time() - total_start_time
864
+
865
+ # Pass empty dict - _collect_and_save_results will collect all interim results
866
+ # All results from this run have already been saved to interim directory
867
+ raw_results = {}
868
+
869
+ final_results = self._collect_and_save_results(
870
+ raw_results, sample_latencies, total_elapsed_time, error_files, error_details
871
+ )
872
+
873
+ return final_results
874
+
875
+
876
+ class HttpClientInference(BaseInference):
877
+ """Base class for HTTP-based API services (Upstage, LlamaParse).
878
+
879
+ This class provides:
880
+ - Automatic httpx.AsyncClient management for async mode
881
+ - Common pattern for overriding infer_async with client context
882
+
883
+ Subclasses should implement _call_api_async(filepath, client) and _call_api_sync(filepath).
884
+ """
885
+
886
+ async def infer_async(self, file_path, *args, **kwargs):
887
+ """Infer the layout of documents with concurrent processing.
888
+
889
+ Creates an httpx.AsyncClient and passes it to the parent's infer_async.
890
+
891
+ Args:
892
+ file_path (str): the path to the file or directory containing the documents to process
893
+
894
+ Returns:
895
+ dict: Final processed results
896
+ """
897
+ import httpx
898
+ async with httpx.AsyncClient() as client:
899
+ return await super().infer_async(file_path, client=client, *args, **kwargs)
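With `HttpClientInference` handling client setup, resume logic, timing, and error categorization, a new HTTP-backed service only needs the two API hooks. A minimal hedged sketch (the service name, endpoint URL, and response handling are placeholders, and it assumes the base class's `_merge_processed_data` helper used by the other subclasses):

```
import httpx

from base import HttpClientInference


class ExampleHttpInference(HttpClientInference):
    """Hypothetical HTTP service, shown only to illustrate the subclass contract."""

    ENDPOINT = "https://api.example.com/layout"  # placeholder URL

    async def _call_api_async(self, filepath, client, *args, **kwargs):
        with open(filepath, "rb") as f:
            response = await client.post(self.ENDPOINT, files={"document": f})
        response.raise_for_status()
        return response.json()

    def _call_api_sync(self, filepath, *args, **kwargs):
        with open(filepath, "rb") as f:
            response = httpx.post(self.ENDPOINT, files={"document": f})
        response.raise_for_status()
        return response.json()

    def post_process(self, data):
        # A real subclass would map raw responses into the common
        # {"elements": [...]} structure used by the other scripts.
        return self._merge_processed_data(data)
```

Invoked like the other scripts, e.g. `ExampleHttpInference("predictions/example.json", concurrent_limit=4).infer("dataset/images")` (paths are placeholders).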
scripts/doc_grouping.py ADDED
@@ -0,0 +1,210 @@
1
+ """
2
+ Utility functions for grouping per-page results into document-level results.
3
+
4
+ This module provides common functionality for:
5
+ - Extracting page numbers from image filenames
6
+ - Converting page-level keys to document-level keys
7
+ - Grouping page-level elements into document-level structure
8
+
9
+ Usage:
10
+ from doc_grouping import group_pages_to_documents
11
+
12
+ # Group page-level results into document-level
13
+ doc_results = group_pages_to_documents(
14
+ page_results,
15
+ file_ext_mapping={"jpg": "pdf"},
16
+ )
17
+ """
18
+
19
+ import re
20
+ from typing import Dict, List, Optional, Tuple
21
+
22
+
23
+ def extract_page_number(image_key: str) -> int:
24
+ """
25
+ Extract page number from image key.
26
+
27
+ Supports formats like:
28
+ - 'document_page0001.jpg' -> 1
29
+ - 'document_page0042.pdf' -> 42
30
+
31
+ Args:
32
+ image_key: Image filename with page suffix
33
+
34
+ Returns:
35
+ Page number (1-based), or 1 if no page number found
36
+ """
37
+ match = re.search(r"_page(\d{4})\.", image_key)
38
+ if match:
39
+ return int(match.group(1))
40
+ return 1
41
+
42
+
43
+ def to_doc_key(image_key: str) -> str:
44
+ """
45
+ Convert page image key to document key by removing the page suffix.
46
+
47
+ Example:
48
+ 'mt_2020-06-25T04-07-00Z_s100j0q8_39-43_page0001.jpg'
49
+ -> 'mt_2020-06-25T04-07-00Z_s100j0q8_39-43.jpg'
50
+
51
+ Args:
52
+ image_key: Page-level image filename
53
+
54
+ Returns:
55
+ Document-level filename (without page suffix)
56
+ """
57
+ return re.sub(r"_page\d{4}(?=\.)", "", image_key)
58
+
59
+
60
+ def replace_ext(filename: str, file_ext_mapping: Dict[str, str]) -> str:
61
+ """
62
+ Replace file extension according to mapping.
63
+
64
+ Args:
65
+ filename: Original filename
66
+ file_ext_mapping: Dict mapping source extensions to destination extensions
67
+ e.g., {"jpg": "pdf", "jpeg": "pdf"}
68
+
69
+ Returns:
70
+ Filename with replaced extension
71
+ """
72
+ if not file_ext_mapping:
73
+ return filename
74
+ src_ext = filename.split(".")[-1]
75
+ dst_ext = file_ext_mapping.get(src_ext, src_ext)
76
+ return filename[: -(len(src_ext))] + dst_ext
77
+
78
+
79
+ def parse_ext_mapping(mapping_str: str) -> Dict[str, str]:
80
+ """
81
+ Parse file extension mapping from CLI string format.
82
+
83
+ Supports two formats:
84
+ - Colon format: "jpg:pdf" (single mapping)
85
+ - Arrow format: "jpg->pdf,jpeg->pdf" (multiple mappings)
86
+
87
+ Args:
88
+ mapping_str: String in format "src:dst" or "src1->dst1,src2->dst2"
89
+
90
+ Returns:
91
+ Dict mapping source extensions to destination extensions
92
+
93
+ Raises:
94
+ ValueError: If mapping format is invalid
95
+ """
96
+ file_ext_mapping = {}
97
+ if not mapping_str:
98
+ return file_ext_mapping
99
+
100
+ # Try colon format first (single mapping)
101
+ if ":" in mapping_str and "->" not in mapping_str:
102
+ parts = mapping_str.split(":")
103
+ if len(parts) == 2:
104
+ src, dst = parts[0].strip(), parts[1].strip()
105
+ if src and dst:
106
+ return {src: dst}
107
+
108
+ # Try arrow format (multiple mappings)
109
+ for pair in mapping_str.split(","):
110
+ pair = pair.strip()
111
+ if "->" in pair:
112
+ src, dst = pair.split("->", 1)
113
+ src = src.strip()
114
+ dst = dst.strip()
115
+ if src and dst:
116
+ file_ext_mapping[src] = dst
117
+ elif ":" in pair:
118
+ src, dst = pair.split(":", 1)
119
+ src = src.strip()
120
+ dst = dst.strip()
121
+ if src and dst:
122
+ file_ext_mapping[src] = dst
123
+
124
+ if not file_ext_mapping and mapping_str:
125
+ raise ValueError(
126
+ f"Invalid mapping format: '{mapping_str}'. "
127
+ "Use 'src:dst' or 'src1->dst1,src2->dst2' format."
128
+ )
129
+
130
+ return file_ext_mapping
131
+
132
+
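Both accepted CLI spellings resolve to the same dictionary; a quick hedged sanity check:

```
from doc_grouping import parse_ext_mapping

assert parse_ext_mapping("jpg:pdf") == {"jpg": "pdf"}                             # colon form
assert parse_ext_mapping("jpg->pdf,jpeg->pdf") == {"jpg": "pdf", "jpeg": "pdf"}   # arrow form
assert parse_ext_mapping("") == {}                                                # empty string is allowed

try:
    parse_ext_mapping("not a mapping")  # neither format matches
except ValueError as e:
    print(e)
```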
133
+ def group_pages_to_documents(
134
+ page_data: Dict[str, Dict],
135
+ file_ext_mapping: Optional[Dict[str, str]] = None,
136
+ elements_key: str = "elements",
137
+ include_merged_tables: bool = True,
138
+ ) -> Dict[str, Dict]:
139
+ """
140
+ Group page-level results into document-level results.
141
+
142
+ Takes a dictionary of page-level results (keyed by page image filename)
143
+ and aggregates them into document-level results.
144
+
145
+ Args:
146
+ page_data: Dict mapping page filenames to their data.
147
+ Each page data should have an 'elements' list.
148
+ Example: {"doc_page0001.jpg": {"elements": [...]}, ...}
149
+ file_ext_mapping: Optional dict mapping source extensions to destination.
150
+ e.g., {"jpg": "pdf"} to output "doc.pdf" instead of "doc.jpg"
151
+ elements_key: Key name for the elements list in each page (default: "elements")
152
+ include_merged_tables: Whether to include empty "merged_tables" list (default: True)
153
+
154
+ Returns:
155
+ Dict mapping document filenames to aggregated data.
156
+ Example: {"doc.pdf": {"elements": [...], "merged_tables": []}}
157
+ """
158
+ if file_ext_mapping is None:
159
+ file_ext_mapping = {}
160
+
161
+ result: Dict[str, Dict] = {}
162
+ id_counters: Dict[str, int] = {}
163
+
164
+ for page_key, page_content in page_data.items():
165
+ # Get document key and page number
166
+ doc_key_raw = to_doc_key(page_key)
167
+ doc_key = replace_ext(doc_key_raw, file_ext_mapping)
168
+ page_number = extract_page_number(page_key)
169
+
170
+ # Initialize document entry if not exists
171
+ if doc_key not in result:
172
+ result[doc_key] = {elements_key: []}
173
+ if include_merged_tables:
174
+ result[doc_key]["merged_tables"] = []
175
+ id_counters[doc_key] = 0
176
+
177
+ # Get elements from page data
178
+ elements = page_content.get(elements_key, [])
179
+
180
+ # Add each element with updated id and page number
181
+ for elem in elements:
182
+ # Create a copy to avoid modifying original
183
+ new_elem = elem.copy()
184
+ if "content" in elem:
185
+ new_elem["content"] = elem["content"].copy()
186
+
187
+ new_elem["id"] = id_counters[doc_key]
188
+ new_elem["page"] = page_number
189
+
190
+ result[doc_key][elements_key].append(new_elem)
191
+ id_counters[doc_key] += 1
192
+
193
+ return result
194
+
195
+
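For reference, a small worked example of the grouping (filenames and element contents are made up):

```
from doc_grouping import group_pages_to_documents

pages = {
    "report_page0001.jpg": {"elements": [
        {"id": 0, "category": "heading1",
         "content": {"text": "Title", "html": "", "markdown": ""}}]},
    "report_page0002.jpg": {"elements": [
        {"id": 0, "category": "paragraph",
         "content": {"text": "Body text", "html": "", "markdown": ""}}]},
}

docs = group_pages_to_documents(pages, file_ext_mapping={"jpg": "pdf"})
print(list(docs.keys()))                                                # ['report.pdf']
print([(e["id"], e["page"]) for e in docs["report.pdf"]["elements"]])   # [(0, 1), (1, 2)]
print(docs["report.pdf"]["merged_tables"])                              # []
```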
196
+ def is_multi_page_dataset(page_keys: List[str]) -> bool:
197
+ """
198
+ Check if the dataset contains multi-page documents.
199
+
200
+ Looks for page suffixes like '_page0001' in the filenames.
201
+
202
+ Args:
203
+ page_keys: List of page-level filenames
204
+
205
+ Returns:
206
+ True if any filename has a page suffix
207
+ """
208
+ page_pattern = re.compile(r"_page\d{4}\.")
209
+ return any(page_pattern.search(key) for key in page_keys)
210
+
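The suffix check above is what gates the grouping step in `_collect_and_save_results`; a short example of its behavior:

```
from doc_grouping import is_multi_page_dataset

print(is_multi_page_dataset(["report_page0001.jpg", "report_page0002.jpg"]))  # True
print(is_multi_page_dataset(["invoice.jpg", "receipt.png"]))                  # False
```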
scripts/infer_aws.py CHANGED
@@ -1,11 +1,14 @@
 
 
 
 
 
 
1
  import os
2
- import cv2
3
- import json
4
- import time
5
  import boto3
6
- import argparse
7
 
8
- from utils import read_file_paths, validate_json_save_path, load_json_file
9
 
10
  CATEGORY_MAP = {
11
  "LAYOUT_TEXT": "paragraph",
@@ -21,17 +24,43 @@ CATEGORY_MAP = {
21
  }
22
 
23
 
24
- class AWSInference:
 
 
25
  def __init__(
26
  self,
27
  save_path,
28
- input_formats=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"]
 
 
 
 
 
 
29
  ):
30
- """Initialize the AWSInference class
 
31
  Args:
32
  save_path (str): the json path to save the results
33
  input_formats (list, optional): the supported file formats.
 
 
 
 
 
 
34
  """
35
  AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID") or ""
36
  AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY") or ""
37
  AWS_REGION = os.getenv("AWS_REGION") or ""
@@ -47,40 +76,54 @@ class AWSInference:
47
  aws_secret_access_key=AWS_SECRET_ACCESS_KEY
48
  )
49
 
50
- self.s3 = boto3.resource("s3")
 
 
 
 
 
51
  self.s3_bucket_name = AWS_S3_BUCKET_NAME
52
 
53
- validate_json_save_path(save_path)
54
- self.save_path = save_path
55
- self.processed_data = load_json_file(save_path)
56
-
57
- self.formats = input_formats
 
 
 
 
 
 
58
 
59
  def post_process(self, data):
60
- def get_text(result, blocks_map):
61
- text = ""
62
- if "Relationships" in result:
63
- for relationship in result["Relationships"]:
64
- if relationship["Type"] == "CHILD":
65
- for child_id in relationship["Ids"]:
66
- word = blocks_map[child_id]
67
- if word["BlockType"] == "WORD":
68
- text += " " + word["Text"]
69
- return text[1:]
70
-
71
  processed_dict = {}
72
  for input_key in data.keys():
73
  output_data = data[input_key]
74
-
75
- processed_dict[input_key] = {
76
- "elements": []
77
- }
78
 
79
  all_elems = {}
80
  for page_data in output_data:
81
  for elem in page_data["Blocks"]:
82
- _id = elem["Id"]
83
- all_elems[_id] = elem
84
 
85
  for page_data in output_data:
86
  for idx, elem in enumerate(page_data["Blocks"]):
@@ -88,114 +131,86 @@ class AWSInference:
88
  continue
89
 
90
  if "LAYOUT" in elem["BlockType"] and elem["BlockType"] != "LAYOUT_TABLE":
91
-
92
  bbox = elem["Geometry"]["BoundingBox"]
 
93
 
94
- x = bbox["Left"]
95
- y = bbox["Top"]
96
- w = bbox["Width"]
97
- h = bbox["Height"]
98
-
99
- coord = [
100
- [x, y],
101
- [x + w, y],
102
- [x + w, y + h],
103
- [x, y + h]
104
- ]
105
  xy_coord = [{"x": x, "y": y} for x, y in coord]
106
-
107
  category = CATEGORY_MAP.get(elem["BlockType"], "paragraph")
108
 
109
  transcription = ""
110
-
111
- if elem["BlockType"] != "LAYOUT_FIGURE":
112
- for item in all_elems[elem["Id"]]["Relationships"]:
113
  for id_ in item["Ids"]:
114
  if all_elems[id_]["BlockType"] == "LINE":
115
- word = all_elems[id_]["Text"]
116
- transcription += word + "\n"
117
 
118
  data_dict = {
119
  "coordinates": xy_coord,
120
  "category": category,
121
  "id": idx,
122
- "content": {
123
- "text": transcription,
124
- "html": "",
125
- "markdown": ""
126
- }
127
  }
128
  processed_dict[input_key]["elements"].append(data_dict)
129
 
130
  elif elem["BlockType"] == "TABLE":
131
-
132
  bbox = elem["Geometry"]["BoundingBox"]
 
133
 
134
- x = bbox["Left"]
135
- y = bbox["Top"]
136
- w = bbox["Width"]
137
- h = bbox["Height"]
138
-
139
- coord = [
140
- [x, y],
141
- [x + w, y],
142
- [x + w, y + h],
143
- [x, y + h]
144
- ]
145
  xy_coord = [{"x": x, "y": y} for x, y in coord]
146
-
147
  category = CATEGORY_MAP.get(elem["BlockType"], "paragraph")
148
 
149
  table_cells = {}
150
  for relationship in elem["Relationships"]:
151
  if relationship["Type"] == "CHILD":
152
  for cell_id in relationship["Ids"]:
153
- cell_block = next((block for block in page_data["Blocks"] if block["Id"] == cell_id), None)
154
- if cell_block is not None and cell_block["BlockType"] == "CELL":
155
  row_index = cell_block["RowIndex"] - 1
156
  column_index = cell_block["ColumnIndex"] - 1
157
- row_span = cell_block["RowSpan"]
158
- column_span = cell_block["ColumnSpan"]
159
  table_cells[(row_index, column_index)] = {
160
  "block": cell_block,
161
- "span": (row_span, column_span),
162
- "text": get_text(cell_block, all_elems),
163
  }
164
- max_row_index = max(cell[0] for cell in table_cells.keys())
165
- max_column_index = max(cell[1] for cell in table_cells.keys())
 
 
 
 
 
166
  for relationship in elem["Relationships"]:
167
  if relationship["Type"] == "MERGED_CELL":
168
  for cell_id in relationship["Ids"]:
169
- cell_block = next((block for block in page_data["Blocks"] if block["Id"] == cell_id), None)
170
- if cell_block is not None and cell_block["BlockType"] == "MERGED_CELL":
171
  row_index = cell_block["RowIndex"] - 1
172
  column_index = cell_block["ColumnIndex"] - 1
173
  row_span = cell_block["RowSpan"]
174
  column_span = cell_block["ColumnSpan"]
175
  for i in range(row_span):
176
  for j in range(column_span):
177
- del table_cells[(row_index + i, column_index + j)]
178
  text = ""
179
- for child_ids in cell_block["Relationships"][0]["Ids"]:
180
- child_cell_block = next((block for block in page_data["Blocks"] if block["Id"] == child_ids), None)
181
- text += " " + get_text(child_cell_block, all_elems)
182
  table_cells[(row_index, column_index)] = {
183
  "block": cell_block,
184
  "span": (row_span, column_span),
185
- "text": text[1:],
186
  }
 
187
  html_table = "<table>"
188
-
189
- for row_index in range(max_row_index + 1):
190
  html_table += "<tr>"
191
- for column_index in range(max_column_index + 1):
192
  cell_data = table_cells.get((row_index, column_index))
193
  if cell_data:
194
- cell_block = cell_data["block"]
195
  row_span, column_span = cell_data["span"]
196
-
197
- cell_text = cell_data["text"]
198
- html_table += f"<td rowspan='{row_span}' colspan='{column_span}''>{cell_text}</td>"
199
  html_table += "</tr>"
200
  html_table += "</table>"
201
 
@@ -203,152 +218,124 @@ class AWSInference:
203
  "coordinates": xy_coord,
204
  "category": category,
205
  "id": idx,
206
- "content": {
207
- "text": "",
208
- "html": html_table,
209
- "markdown": ""
210
- }
211
  }
212
  processed_dict[input_key]["elements"].append(data_dict)
213
 
214
- for key in self.processed_data:
215
- processed_dict[key] = self.processed_data[key]
216
-
217
- return processed_dict
218
-
219
 
220
- def start_job(self, object_name):
 
 
221
  filename_with_ext = os.path.basename(object_name)
222
-
223
  print(f"uploading {filename_with_ext} to s3")
224
  self.s3.Bucket(self.s3_bucket_name).upload_file(object_name, filename_with_ext)
225
 
226
- response = None
227
  response = self.client.start_document_analysis(
228
  DocumentLocation={
229
- "S3Object": {
230
- "Bucket": self.s3_bucket_name,
231
- "Name": filename_with_ext
232
- }
233
  },
234
- FeatureTypes = ["LAYOUT", "TABLES"]
235
  )
236
-
237
  return response["JobId"]
238
 
239
- def is_job_complete(self, job_id):
 
 
240
  time.sleep(1)
241
  response = self.client.get_document_analysis(JobId=job_id)
242
  status = response["JobStatus"]
243
- print("Job status: {}".format(status))
244
 
245
- while(status == "IN_PROGRESS"):
246
  time.sleep(1)
247
  response = self.client.get_document_analysis(JobId=job_id)
248
  status = response["JobStatus"]
249
- print("Job status: {}".format(status))
250
 
251
  return status
252
 
253
- def get_job_results(self, job_id):
 
 
254
  pages = []
255
  time.sleep(1)
256
  response = self.client.get_document_analysis(JobId=job_id)
257
  pages.append(response)
258
- print("Resultset page received: {}".format(len(pages)))
259
- next_token = None
260
- if "NextToken" in response:
261
- next_token = response["NextToken"]
262
 
263
  while next_token:
264
  time.sleep(1)
265
- response = self.client.\
266
- get_document_analysis(JobId=job_id, NextToken=next_token)
267
  pages.append(response)
268
- print("Resultset page received: {}".format(len(pages)))
269
- next_token = None
270
- if "NextToken" in response:
271
- next_token = response["NextToken"]
272
 
273
  return pages
274
 
275
- def infer(self, file_path):
276
- """Infer the layout of the documents in the given file path
277
- Args:
278
- file_path (str): the path to the file or directory containing the documents to process
279
- """
280
- paths = read_file_paths(file_path, supported_formats=self.formats)
281
-
282
- error_files = []
283
-
284
- result_dict = {}
285
- for idx, filepath in enumerate(paths):
286
- print("({}/{}) {}".format(idx+1, len(paths), filepath))
287
-
288
- filename = filepath.name
289
- if filename in self.processed_data.keys():
290
- print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
291
- continue
292
-
293
- try:
294
- if os.path.splitext(filepath)[-1] == ".pdf":
295
- job_id = self.start_job(filepath)
296
- print("Started job with id: {}".format(job_id))
297
- if self.is_job_complete(job_id):
298
- result = self.get_job_results(job_id)
299
- else:
300
- with open(filepath, "rb") as file:
301
- img_test = file.read()
302
- bytes_test = bytearray(img_test)
303
-
304
- result = self.client.analyze_document(
305
- Document={"Bytes": bytes_test},
306
- FeatureTypes = ["LAYOUT", "TABLES"]
307
- )
308
- except Exception as e:
309
- print(e)
310
- print("Error processing document..")
311
- error_files.append(filepath)
312
- continue
313
-
314
- result_dict[filename] = result
315
-
316
- result_dict = self.post_process(result_dict)
317
-
318
- with open(self.save_path, "w", encoding="utf-8") as f:
319
- json.dump(result_dict, f, ensure_ascii=False, indent=4)
320
-
321
- for error_file in error_files:
322
- print(f"Error processing file: {error_file}")
323
-
324
- print("Finished processing all documents")
325
- print("Results saved to: {}".format(self.save_path))
326
- print("Number of errors: {}".format(len(error_files)))
327
 
328
 
329
  if __name__ == "__main__":
330
- args = argparse.ArgumentParser()
331
- args.add_argument(
332
- "--data_path",
333
- type=str, default="", required=True,
334
- help="Path containing the documents to process"
335
- )
336
- args.add_argument(
337
- "--save_path",
338
- type=str, default="", required=True,
339
- help="Path to save the results"
340
- )
341
- args.add_argument(
342
- "--input_formats",
343
- type=list, default=[
344
- ".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"
345
- ],
346
- help="Supported input file formats"
347
- )
348
- args = args.parse_args()
349
 
350
  aws_inference = AWSInference(
351
  args.save_path,
352
- input_formats=args.input_formats
 
 
 
 
 
 
353
  )
354
  aws_inference.infer(args.data_path)
 
1
+ """
2
+ AWS Textract document layout inference.
3
+
4
+ Uses AWS Textract API for document analysis.
5
+ """
6
+ import asyncio
7
  import os
8
+
 
 
9
  import boto3
 
10
 
11
+ from base import BaseInference, create_argument_parser, parse_args_with_extra
12
 
13
  CATEGORY_MAP = {
14
  "LAYOUT_TEXT": "paragraph",
 
24
  }
25
 
26
 
27
+ class AWSInference(BaseInference):
28
+ """AWS Textract document layout inference."""
29
+
30
  def __init__(
31
  self,
32
  save_path,
33
+ input_formats=None,
34
+ concurrent_limit=None,
35
+ sampling_rate=1.0,
36
+ request_timeout=600,
37
+ random_seed=None,
38
+ group_by_document=False,
39
+ file_ext_mapping=None
40
  ):
41
+ """Initialize the AWSInference class.
42
+
43
  Args:
44
  save_path (str): the json path to save the results
45
  input_formats (list, optional): the supported file formats.
46
+ concurrent_limit (int, optional): maximum number of concurrent API requests
47
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0)
48
+ request_timeout (float, optional): timeout in seconds for API requests
49
+ random_seed (int, optional): random seed for reproducible sampling
50
+ group_by_document (bool, optional): group per-page results into document-level
51
+ file_ext_mapping (str or dict, optional): file extension mapping for grouping
52
  """
53
+ super().__init__(
54
+ save_path,
55
+ input_formats,
56
+ concurrent_limit,
57
+ sampling_rate,
58
+ request_timeout,
59
+ random_seed,
60
+ group_by_document,
61
+ file_ext_mapping
62
+ )
63
+
64
  AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID") or ""
65
  AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY") or ""
66
  AWS_REGION = os.getenv("AWS_REGION") or ""
 
76
  aws_secret_access_key=AWS_SECRET_ACCESS_KEY
77
  )
78
 
79
+ self.s3 = boto3.resource(
80
+ "s3",
81
+ region_name=AWS_REGION,
82
+ aws_access_key_id=AWS_ACCESS_KEY_ID,
83
+ aws_secret_access_key=AWS_SECRET_ACCESS_KEY
84
+ )
85
  self.s3_bucket_name = AWS_S3_BUCKET_NAME
86
 
87
+ def _get_text(self, result, blocks_map):
88
+ """Extract text from a block using its relationships."""
89
+ text = ""
90
+ if "Relationships" in result:
91
+ for relationship in result["Relationships"]:
92
+ if relationship["Type"] == "CHILD":
93
+ for child_id in relationship["Ids"]:
94
+ word = blocks_map[child_id]
95
+ if word["BlockType"] == "WORD":
96
+ text += " " + word["Text"]
97
+ return text[1:] if text else ""
98
 
99
  def post_process(self, data):
100
+ """Post-process AWS Textract API response to standard format."""
 
 
 
 
 
 
 
 
 
 
101
  processed_dict = {}
102
  for input_key in data.keys():
103
  output_data = data[input_key]
104
+
105
+ # Normalize output_data to always be a list of pages
106
+ if isinstance(output_data, dict) and "Blocks" in output_data:
107
+ output_data = [output_data]
108
+
109
+ # Handle time_sec if present
110
+ time_sec = None
111
+ if isinstance(output_data, dict) and "time_sec" in output_data:
112
+ time_sec = output_data["time_sec"]
113
+ if "Blocks" in output_data:
114
+ output_data = [output_data]
115
+ elif isinstance(output_data.get("data"), list):
116
+ output_data = output_data["data"]
117
+
118
+ processed_dict[input_key] = {"elements": []}
119
+
120
+ if time_sec is not None:
121
+ processed_dict[input_key]["time_sec"] = time_sec
122
 
123
  all_elems = {}
124
  for page_data in output_data:
125
  for elem in page_data["Blocks"]:
126
+ all_elems[elem["Id"]] = elem
 
127
 
128
  for page_data in output_data:
129
  for idx, elem in enumerate(page_data["Blocks"]):
 
131
  continue
132
 
133
  if "LAYOUT" in elem["BlockType"] and elem["BlockType"] != "LAYOUT_TABLE":
 
134
  bbox = elem["Geometry"]["BoundingBox"]
135
+ x, y, w, h = bbox["Left"], bbox["Top"], bbox["Width"], bbox["Height"]
136
 
137
+ coord = [[x, y], [x + w, y], [x + w, y + h], [x, y + h]]
  xy_coord = [{"x": x, "y": y} for x, y in coord]
 
139
  category = CATEGORY_MAP.get(elem["BlockType"], "paragraph")
140
 
141
  transcription = ""
142
+ if elem["BlockType"] not in ["LAYOUT_FIGURE", "LAYOUT_KEY_VALUE"]:
143
+ for item in all_elems[elem["Id"]].get("Relationships", []):
 
144
  for id_ in item["Ids"]:
145
  if all_elems[id_]["BlockType"] == "LINE":
146
+ transcription += all_elems[id_]["Text"] + "\n"
 
147
 
148
  data_dict = {
149
  "coordinates": xy_coord,
150
  "category": category,
151
  "id": idx,
152
+ "content": {"text": transcription, "html": "", "markdown": ""}
 
 
 
 
153
  }
154
  processed_dict[input_key]["elements"].append(data_dict)
155
 
156
  elif elem["BlockType"] == "TABLE":
 
157
  bbox = elem["Geometry"]["BoundingBox"]
158
+ x, y, w, h = bbox["Left"], bbox["Top"], bbox["Width"], bbox["Height"]
159
 
160
+ coord = [[x, y], [x + w, y], [x + w, y + h], [x, y + h]]
161
  xy_coord = [{"x": x, "y": y} for x, y in coord]
 
162
  category = CATEGORY_MAP.get(elem["BlockType"], "paragraph")
163
 
164
  table_cells = {}
165
  for relationship in elem["Relationships"]:
166
  if relationship["Type"] == "CHILD":
167
  for cell_id in relationship["Ids"]:
168
+ cell_block = next((b for b in page_data["Blocks"] if b["Id"] == cell_id), None)
169
+ if cell_block and cell_block["BlockType"] == "CELL":
170
  row_index = cell_block["RowIndex"] - 1
171
  column_index = cell_block["ColumnIndex"] - 1
 
 
172
  table_cells[(row_index, column_index)] = {
173
  "block": cell_block,
174
+ "span": (cell_block["RowSpan"], cell_block["ColumnSpan"]),
175
+ "text": self._get_text(cell_block, all_elems),
176
  }
177
+
178
+ if not table_cells:
179
+ continue
180
+
181
+ max_row = max(c[0] for c in table_cells.keys())
182
+ max_col = max(c[1] for c in table_cells.keys())
183
+
184
  for relationship in elem["Relationships"]:
185
  if relationship["Type"] == "MERGED_CELL":
186
  for cell_id in relationship["Ids"]:
187
+ cell_block = next((b for b in page_data["Blocks"] if b["Id"] == cell_id), None)
188
+ if cell_block and cell_block["BlockType"] == "MERGED_CELL":
189
  row_index = cell_block["RowIndex"] - 1
190
  column_index = cell_block["ColumnIndex"] - 1
191
  row_span = cell_block["RowSpan"]
192
  column_span = cell_block["ColumnSpan"]
193
  for i in range(row_span):
194
  for j in range(column_span):
195
+ table_cells.pop((row_index + i, column_index + j), None)
196
  text = ""
197
+ for child_ids in cell_block.get("Relationships", [{}])[0].get("Ids", []):
198
+ child_cell = next((b for b in page_data["Blocks"] if b["Id"] == child_ids), None)
199
+ text += " " + self._get_text(child_cell, all_elems) if child_cell else ""
200
  table_cells[(row_index, column_index)] = {
201
  "block": cell_block,
202
  "span": (row_span, column_span),
203
+ "text": text[1:] if text else "",
204
  }
205
+
206
  html_table = "<table>"
207
+ for row_index in range(max_row + 1):
 
208
  html_table += "<tr>"
209
+ for column_index in range(max_col + 1):
210
  cell_data = table_cells.get((row_index, column_index))
211
  if cell_data:
 
212
  row_span, column_span = cell_data["span"]
213
+ html_table += f"<td rowspan='{row_span}' colspan='{column_span}'>{cell_data['text']}</td>"
 
 
214
  html_table += "</tr>"
215
  html_table += "</table>"
216
 
 
218
  "coordinates": xy_coord,
219
  "category": category,
220
  "id": idx,
221
+ "content": {"text": "", "html": html_table, "markdown": ""}
 
 
 
 
222
  }
223
  processed_dict[input_key]["elements"].append(data_dict)
224
 
225
+ return self._merge_processed_data(processed_dict)
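The normalization at the top of `post_process` exists because the raw result for a file can arrive in several shapes, depending on whether it came from a fresh call or a resumed interim result. A hedged, abbreviated illustration of those shapes:

```
# Single-image analyze_document response (one page)
single_page = {"Blocks": []}

# Multi-page PDF: list of get_document_analysis result pages
pdf_pages = [{"Blocks": []}, {"Blocks": []}]

# Resumed interim result wrapping the payload with timing info
interim = {"data": [{"Blocks": []}], "time_sec": 3.2}

# post_process normalizes all of these to a list of page dicts
# before walking each page's Blocks.
```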
 
 
 
 
226
 
227
+ def _start_job(self, object_name):
228
+ """Start a Textract job for PDF processing."""
229
+ import time
230
  filename_with_ext = os.path.basename(object_name)
 
231
  print(f"uploading {filename_with_ext} to s3")
232
  self.s3.Bucket(self.s3_bucket_name).upload_file(object_name, filename_with_ext)
233
 
 
234
  response = self.client.start_document_analysis(
235
  DocumentLocation={
236
+ "S3Object": {"Bucket": self.s3_bucket_name, "Name": filename_with_ext}
 
 
 
237
  },
238
+ FeatureTypes=["LAYOUT", "TABLES"]
239
  )
 
240
  return response["JobId"]
241
 
242
+ def _is_job_complete(self, job_id):
243
+ """Check if Textract job is complete."""
244
+ import time
245
  time.sleep(1)
246
  response = self.client.get_document_analysis(JobId=job_id)
247
  status = response["JobStatus"]
248
+ print(f"Job status: {status}")
249
 
250
+ while status == "IN_PROGRESS":
251
  time.sleep(1)
252
  response = self.client.get_document_analysis(JobId=job_id)
253
  status = response["JobStatus"]
254
+ print(f"Job status: {status}")
255
 
256
  return status
257
 
258
+ def _get_job_results(self, job_id):
259
+ """Get all pages of Textract job results."""
260
+ import time
261
  pages = []
262
  time.sleep(1)
263
  response = self.client.get_document_analysis(JobId=job_id)
264
  pages.append(response)
265
+ print(f"Resultset page received: {len(pages)}")
266
+ next_token = response.get("NextToken")
 
 
267
 
268
  while next_token:
269
  time.sleep(1)
270
+ response = self.client.get_document_analysis(JobId=job_id, NextToken=next_token)
 
271
  pages.append(response)
272
+ print(f"Resultset page received: {len(pages)}")
273
+ next_token = response.get("NextToken")
 
 
274
 
275
  return pages
276
 
277
+ async def _call_api_async(self, filepath, *args, **kwargs):
278
+ """Make the actual async API call for a file."""
279
+ loop = asyncio.get_event_loop()
280
+
281
+ if filepath.suffix.lower() == ".pdf":
282
+ job_id = await loop.run_in_executor(None, self._start_job, str(filepath))
283
+ print(f"Started job with id: {job_id}")
284
+
285
+ status = await loop.run_in_executor(None, self._is_job_complete, job_id)
286
+ if status == "SUCCEEDED":
287
+ result = await loop.run_in_executor(None, self._get_job_results, job_id)
288
+ else:
289
+ raise Exception(f"Job {job_id} failed with status: {status}")
290
+ else:
291
+ with open(filepath, "rb") as file:
292
+ img_test = file.read()
293
+ bytes_test = bytearray(img_test)
294
+
295
+ result = await loop.run_in_executor(
296
+ None,
297
+ lambda: self.client.analyze_document(
298
+ Document={"Bytes": bytes_test},
299
+ FeatureTypes=["LAYOUT", "TABLES"]
300
+ )
301
+ )
302
+
303
+ return result
304
+
305
+ def _call_api_sync(self, filepath, *args, **kwargs):
306
+ """Make the actual sync API call for a file."""
307
+ if filepath.suffix.lower() == ".pdf":
308
+ job_id = self._start_job(str(filepath))
309
+ print(f"Started job with id: {job_id}")
310
+ status = self._is_job_complete(job_id)
311
+ if status == "SUCCEEDED":
312
+ result = self._get_job_results(job_id)
313
+ else:
314
+ raise Exception(f"Job {job_id} failed with status: {status}")
315
+ else:
316
+ with open(filepath, "rb") as file:
317
+ bytes_test = bytearray(file.read())
318
+
319
+ result = self.client.analyze_document(
320
+ Document={"Bytes": bytes_test},
321
+ FeatureTypes=["LAYOUT", "TABLES"]
322
+ )
323
+
324
+ return result
 
 
 
 
325
 
326
 
327
  if __name__ == "__main__":
328
+ parser = create_argument_parser("AWS Textract document layout inference")
329
+ args = parse_args_with_extra(parser)
330
 
331
  aws_inference = AWSInference(
332
  args.save_path,
333
+ input_formats=args.input_formats,
334
+ concurrent_limit=args.concurrent,
335
+ sampling_rate=args.sampling_rate,
336
+ request_timeout=args.request_timeout,
337
+ random_seed=args.random_seed,
338
+ group_by_document=args.group_by_document,
339
+ file_ext_mapping=args.file_ext_mapping
340
  )
341
  aws_inference.infer(args.data_path)
scripts/infer_google.py CHANGED
@@ -1,15 +1,20 @@
1
- import os
2
- import json
3
- import google
4
- import argparse
5
 
6
- from glob import glob
7
- from typing import Optional, Sequence
 
 
 
 
 
8
 
 
9
  from google.api_core.client_options import ClientOptions
10
  from google.cloud import documentai
 
11
 
12
- from utils import read_file_paths, validate_json_save_path, load_json_file
13
 
14
  CATEGORY_MAP = {
15
  "paragraph": "paragraph",
@@ -23,17 +28,43 @@ CATEGORY_MAP = {
23
  }
24
 
25
 
26
- class GoogleInference:
 
 
27
  def __init__(
28
  self,
29
  save_path,
30
- input_formats=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"]
 
 
 
 
 
 
31
  ):
32
- """Initialize the GoogleInference class
 
33
  Args:
34
  save_path (str): the json path to save the results
35
  input_formats (list, optional): the supported file formats.
 
 
 
 
 
 
36
  """
 
 
 
 
 
 
 
 
 
 
 
37
  self.project_id = os.getenv("GOOGLE_PROJECT_ID") or ""
38
  self.processor_id = os.getenv("GOOGLE_PROCESSOR_ID") or ""
39
  self.location = os.getenv("GOOGLE_LOCATION") or ""
@@ -44,21 +75,37 @@ class GoogleInference:
44
 
45
  self.processor_version = "rc"
46
 
47
- validate_json_save_path(save_path)
48
- self.save_path = save_path
49
- self.processed_data = load_json_file(save_path)
50
-
51
- self.formats = input_formats
52
 
53
  @staticmethod
54
  def generate_html_table(table_data):
 
55
  html = "<table border='1'>\n"
56
 
57
- # Process body rows
58
  for row in table_data["bodyRows"]:
59
  html += " <tr>\n"
60
  for cell in row["cells"]:
61
- text = cell["blocks"][0]["textBlock"]["text"] if cell["blocks"] else ""
 
 
62
  row_span = f" rowspan='{cell['rowSpan']}'" if cell["rowSpan"] > 1 else ""
63
  col_span = f" colspan='{cell['colSpan']}'" if cell["colSpan"] > 1 else ""
64
  html += f" <td{row_span}{col_span}>{text}</td>\n"
@@ -69,6 +116,7 @@ class GoogleInference:
69
 
70
  @staticmethod
71
  def iterate_blocks(data):
 
72
  block_sequence = []
73
 
74
  def recurse_blocks(blocks):
@@ -78,17 +126,13 @@ class GoogleInference:
78
  block_text = block.get("textBlock", {}).get("text", "")
79
 
80
  if block_type:
81
- # Append block information as a tuple to the sequence list
82
  block_sequence.append((block_id, block_type, block_text))
83
 
84
- block_id = block.get("blockId", "")
85
  block_table = block.get("tableBlock", {})
86
-
87
  if block_table:
88
  block_table_html = GoogleInference.generate_html_table(block_table)
89
  block_sequence.append((block_id, "table", block_table_html))
90
 
91
- # If the block contains sub-blocks, recurse through them
92
  if block.get("textBlock", {}).get("blocks", []):
93
  recurse_blocks(block["textBlock"]["blocks"])
94
 
@@ -98,14 +142,12 @@ class GoogleInference:
98
  return block_sequence
99
 
100
  def post_process(self, data):
101
-
102
  processed_dict = {}
103
  for input_key in data.keys():
104
  output_data = data[input_key]
105
 
106
- processed_dict[input_key] = {
107
- "elements": []
108
- }
109
 
110
  blocks = self.iterate_blocks(output_data)
111
 
@@ -114,7 +156,7 @@ class GoogleInference:
114
  category = CATEGORY_MAP.get(category, "paragraph")
115
 
116
  data_dict = {
117
- "coordinates": [[0, 0], [0, 0], [0, 0], [0, 0]],
118
  "category": category,
119
  "id": id_counter,
120
  "content": {
@@ -124,15 +166,12 @@ class GoogleInference:
124
  }
125
  }
126
  processed_dict[input_key]["elements"].append(data_dict)
127
-
128
  id_counter += 1
129
 
130
- for key in self.processed_data:
131
- processed_dict[key] = self.processed_data[key]
132
-
133
- return processed_dict
134
 
135
- def process_document_layout_sample(self, file_path, mime_type, chunk_size=1000) -> None:
 
136
  process_options = documentai.ProcessOptions(
137
  layout_config=documentai.ProcessOptions.LayoutConfig(
138
  chunking_config=documentai.ProcessOptions.LayoutConfig.ChunkingConfig(
@@ -141,122 +180,83 @@ class GoogleInference:
141
  )
142
  )
143
  )
144
- document = self.process_document(
145
- file_path,
146
- mime_type,
147
- process_options=process_options,
148
- )
149
-
150
- document_dict = json.loads(google.cloud.documentai_v1.Document.to_json(document))
151
 
152
- return document_dict
153
-
154
- def process_document(
155
- self, file_path,
156
  mime_type: str,
157
  process_options: Optional[documentai.ProcessOptions] = None,
158
  ) -> documentai.Document:
 
159
  client = documentai.DocumentProcessorServiceClient(
160
- client_options=ClientOptions(
161
- api_endpoint=f"{self.endpoint}"
162
- )
163
  )
164
 
165
- with open(file_path, "rb") as image:
166
- image_content = image.read()
 
 
 
 
 
 
167
 
168
  name = client.processor_version_path(
169
- self.project_id,
170
- self.location,
171
- self.processor_id,
172
- self.processor_version
173
  )
174
  request = documentai.ProcessRequest(
175
  name=name,
176
- raw_document=documentai.RawDocument(
177
- content=image_content, mime_type=mime_type
178
- ),
179
  process_options=process_options,
180
  )
181
 
182
  result = client.process_document(request=request)
183
-
184
  return result.document
185
 
186
- def infer(self, file_path):
187
- """Infer the layout of the documents in the given file path
188
- Args:
189
- file_path (str): the path to the file or directory containing the documents to process
190
- """
191
- paths = read_file_paths(file_path, supported_formats=self.formats)
192
-
193
- error_files = []
194
-
195
- result_dict = {}
196
- for idx, filepath in enumerate(paths):
197
- print("({}/{}) {}".format(idx+1, len(paths), filepath))
198
-
199
- if filepath.suffix == ".pdf":
200
- mime_type = "application/pdf"
201
- elif filepath.suffix == ".jpg" or filepath.suffix == ".jpeg":
202
- mime_type = "image/jpeg"
203
- elif filepath.suffix == ".png":
204
- mime_type = "image/png"
205
- else:
206
- raise NotImplementedError
207
-
208
- filename = filepath.name
209
-
210
- if filename in self.processed_data.keys():
211
- print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
212
- continue
213
-
214
- try:
215
- document_dict = self.process_document_layout_sample(filepath, mime_type)
216
- except Exception as e:
217
- print(e)
218
- print("Error processing document..")
219
- error_files.append(filepath)
220
- continue
221
-
222
- result_dict[filename] = document_dict
223
-
224
- result_dict = self.post_process(result_dict)
225
-
226
- with open(self.save_path, "w") as f:
227
- json.dump(result_dict, f)
228
-
229
- for error_file in error_files:
230
- print(f"Error processing file: {error_file}")
231
 
232
- print("Finished processing all documents")
233
- print("Results saved to: {}".format(self.save_path))
234
- print("Number of errors: {}".format(len(error_files)))
 
235
 
236
 
237
  if __name__ == "__main__":
238
- args = argparse.ArgumentParser()
239
- args.add_argument(
240
- "--data_path",
241
- type=str, default="", required=True,
242
- help="Path containing the documents to process"
243
- )
244
- args.add_argument(
245
- "--save_path",
246
- type=str, default="", required=True,
247
- help="Path to save the results"
248
- )
249
- args.add_argument(
250
- "--input_formats",
251
- type=list, default=[
252
- ".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"
253
- ],
254
- help="Supported input file formats"
255
- )
256
- args = args.parse_args()
257
 
258
  google_inference = GoogleInference(
259
  args.save_path,
260
- input_formats=args.input_formats
 
 
 
 
 
 
261
  )
262
  google_inference.infer(args.data_path)
 
1
+ """
2
+ Google Document AI layout inference.
 
 
3
 
4
+ Uses Google Cloud Document AI for document analysis.
5
+ """
6
+ import asyncio
7
+ import io
8
+ import json
9
+ import os
10
+ from typing import Optional
11
 
12
+ import google
13
  from google.api_core.client_options import ClientOptions
14
  from google.cloud import documentai
15
+ from PIL import Image
16
 
17
+ from base import BaseInference, create_argument_parser, parse_args_with_extra
18
 
19
  CATEGORY_MAP = {
20
  "paragraph": "paragraph",
 
28
  }
29
 
30
 
31
+ class GoogleInference(BaseInference):
32
+ """Google Document AI layout inference."""
33
+
34
  def __init__(
35
  self,
36
  save_path,
37
+ input_formats=None,
38
+ concurrent_limit=None,
39
+ sampling_rate=1.0,
40
+ request_timeout=600,
41
+ random_seed=None,
42
+ group_by_document=False,
43
+ file_ext_mapping=None
44
  ):
45
+ """Initialize the GoogleInference class.
46
+
47
  Args:
48
  save_path (str): the json path to save the results
49
  input_formats (list, optional): the supported file formats.
50
+ concurrent_limit (int, optional): maximum number of concurrent API requests
51
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0)
52
+ request_timeout (float, optional): timeout in seconds for API requests
53
+ random_seed (int, optional): random seed for reproducible sampling
54
+ group_by_document (bool, optional): group per-page results into document-level
55
+ file_ext_mapping (str or dict, optional): file extension mapping for grouping
56
  """
57
+ super().__init__(
58
+ save_path,
59
+ input_formats,
60
+ concurrent_limit,
61
+ sampling_rate,
62
+ request_timeout,
63
+ random_seed,
64
+ group_by_document,
65
+ file_ext_mapping
66
+ )
67
+
68
  self.project_id = os.getenv("GOOGLE_PROJECT_ID") or ""
69
  self.processor_id = os.getenv("GOOGLE_PROCESSOR_ID") or ""
70
  self.location = os.getenv("GOOGLE_LOCATION") or ""
 
75
 
76
  self.processor_version = "rc"
77
 
78
+ @staticmethod
79
+ def convert_image_to_pdf_bytes(image_path: str) -> bytes:
80
+ """Convert an image file to PDF bytes for Layout Parser compatibility.
81
+
82
+ Args:
83
+ image_path: Path to the image file
84
+
85
+ Returns:
86
+ PDF content as bytes
87
+ """
88
+ with Image.open(image_path) as img:
89
+ # Convert to RGB if necessary (e.g., for RGBA or P mode images)
90
+ if img.mode in ('RGBA', 'P', 'LA'):
91
+ img = img.convert('RGB')
92
+
93
+ pdf_buffer = io.BytesIO()
94
+ img.save(pdf_buffer, format='PDF')
95
+ pdf_buffer.seek(0)
96
+ return pdf_buffer.read()
97
 
98
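Image inputs are converted to PDF in memory before upload for Layout Parser compatibility; a small standalone check of the helper (the temp path is used only for this sketch):

```
from PIL import Image

from infer_google import GoogleInference

# Create a tiny JPEG on disk purely for the demonstration.
Image.new("RGB", (64, 64), "white").save("/tmp/sample.jpg", format="JPEG")

pdf_bytes = GoogleInference.convert_image_to_pdf_bytes("/tmp/sample.jpg")
print(pdf_bytes[:5])  # b'%PDF-' if the conversion worked
```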
  @staticmethod
99
  def generate_html_table(table_data):
100
+ """Generate HTML table from table data."""
101
  html = "<table border='1'>\n"
102
 
 
103
  for row in table_data["bodyRows"]:
104
  html += " <tr>\n"
105
  for cell in row["cells"]:
106
+ text = ""
107
+ if cell["blocks"]:
108
+ text = cell["blocks"][0].get("textBlock", {}).get("text", "")
109
  row_span = f" rowspan='{cell['rowSpan']}'" if cell["rowSpan"] > 1 else ""
110
  col_span = f" colspan='{cell['colSpan']}'" if cell["colSpan"] > 1 else ""
111
  html += f" <td{row_span}{col_span}>{text}</td>\n"
 
116
 
117
  @staticmethod
118
  def iterate_blocks(data):
119
+ """Iterate through document blocks and extract content."""
120
  block_sequence = []
121
 
122
  def recurse_blocks(blocks):
 
126
  block_text = block.get("textBlock", {}).get("text", "")
127
 
128
  if block_type:
 
129
  block_sequence.append((block_id, block_type, block_text))
130
 
 
131
  block_table = block.get("tableBlock", {})
 
132
  if block_table:
133
  block_table_html = GoogleInference.generate_html_table(block_table)
134
  block_sequence.append((block_id, "table", block_table_html))
135
 
 
136
  if block.get("textBlock", {}).get("blocks", []):
137
  recurse_blocks(block["textBlock"]["blocks"])
138
 
 
142
  return block_sequence
143
 
144
  def post_process(self, data):
145
+ """Post-process Google Document AI API response to standard format."""
146
  processed_dict = {}
147
  for input_key in data.keys():
148
  output_data = data[input_key]
149
 
150
+ processed_dict[input_key] = {"elements": []}
 
 
151
 
152
  blocks = self.iterate_blocks(output_data)
153
 
 
156
  category = CATEGORY_MAP.get(category, "paragraph")
157
 
158
  data_dict = {
159
+ "coordinates": [{"x": 0, "y": 0}] * 4,
160
  "category": category,
161
  "id": id_counter,
162
  "content": {
 
166
  }
167
  }
168
  processed_dict[input_key]["elements"].append(data_dict)
 
169
  id_counter += 1
170
 
171
+ return self._merge_processed_data(processed_dict)
 
 
 
172
 
173
+ def _process_document_layout_sample(self, file_path, mime_type, chunk_size=1000):
174
+ """Process document with layout analysis."""
175
  process_options = documentai.ProcessOptions(
176
  layout_config=documentai.ProcessOptions.LayoutConfig(
177
  chunking_config=documentai.ProcessOptions.LayoutConfig.ChunkingConfig(
 
180
  )
181
  )
182
  )
183
+ document = self._process_document(file_path, mime_type, process_options=process_options)
184
+ return json.loads(google.cloud.documentai_v1.Document.to_json(document))
 
 
 
 
 
185
 
186
+ def _process_document(
187
+ self,
188
+ file_path,
 
189
  mime_type: str,
190
  process_options: Optional[documentai.ProcessOptions] = None,
191
  ) -> documentai.Document:
192
+ """Process a single document."""
193
  client = documentai.DocumentProcessorServiceClient(
194
+ client_options=ClientOptions(api_endpoint=self.endpoint)
 
 
195
  )
196
 
197
+ # Convert image to PDF for Layout Parser compatibility
198
+ image_mime_types = ["image/jpeg", "image/png", "image/bmp", "image/tiff", "image/heic"]
199
+ if mime_type in image_mime_types:
200
+ file_content = self.convert_image_to_pdf_bytes(file_path)
201
+ mime_type = "application/pdf"
202
+ else:
203
+ with open(file_path, "rb") as f:
204
+ file_content = f.read()
205
 
206
  name = client.processor_version_path(
207
+ self.project_id, self.location, self.processor_id, self.processor_version
 
 
 
208
  )
209
  request = documentai.ProcessRequest(
210
  name=name,
211
+ raw_document=documentai.RawDocument(content=file_content, mime_type=mime_type),
 
 
212
  process_options=process_options,
213
  )
214
 
215
  result = client.process_document(request=request)
 
216
  return result.document
217
 
218
+ def _get_mime_type(self, filepath):
219
+ """Get MIME type from file path."""
220
+ suffix = filepath.suffix.lower()
221
+ mime_types = {
222
+ ".pdf": "application/pdf",
223
+ ".jpg": "image/jpeg",
224
+ ".jpeg": "image/jpeg",
225
+ ".png": "image/png",
226
+ ".bmp": "image/bmp",
227
+ ".tiff": "image/tiff",
228
+ ".heic": "image/heic"
229
+ }
230
+ if suffix not in mime_types:
231
+ raise NotImplementedError(f"Unsupported file type: {suffix}")
232
+ return mime_types[suffix]
233
+
234
+ async def _call_api_async(self, filepath, *args, **kwargs):
235
+ """Make the actual async API call for a file."""
236
+ mime_type = self._get_mime_type(filepath)
237
+ loop = asyncio.get_event_loop()
238
+ return await loop.run_in_executor(
239
+ None, self._process_document_layout_sample, filepath, mime_type
240
+ )
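
Because the Document AI client is synchronous, the async path above just hands the blocking call to the default thread pool. A standalone sketch of that pattern (the `slow_call` function is purely hypothetical):

```
import asyncio


def slow_call(path):
    # Stand-in for a blocking SDK request.
    return {"path": str(path)}


async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking function in the default executor so the event loop keeps running.
    result = await loop.run_in_executor(None, slow_call, "doc.pdf")
    print(result)  # {'path': 'doc.pdf'}


asyncio.run(main())
```
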
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
241
 
242
+ def _call_api_sync(self, filepath, *args, **kwargs):
243
+ """Make the actual sync API call for a file."""
244
+ mime_type = self._get_mime_type(filepath)
245
+ return self._process_document_layout_sample(filepath, mime_type)
246
 
247
 
248
  if __name__ == "__main__":
249
+ parser = create_argument_parser("Google Document AI layout inference")
250
+ args = parse_args_with_extra(parser)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
251
 
252
  google_inference = GoogleInference(
253
  args.save_path,
254
+ input_formats=args.input_formats,
255
+ concurrent_limit=args.concurrent,
256
+ sampling_rate=args.sampling_rate,
257
+ request_timeout=args.request_timeout,
258
+ random_seed=args.random_seed,
259
+ group_by_document=args.group_by_document,
260
+ file_ext_mapping=args.file_ext_mapping
261
  )
262
  google_inference.infer(args.data_path)
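
A hedged programmatic usage sketch of this script. The module name and all paths are placeholders, and `GOOGLE_PROJECT_ID`, `GOOGLE_PROCESSOR_ID` and `GOOGLE_LOCATION` must be exported before the client can be created:

```
from infer_google import GoogleInference  # module name assumed (scripts/infer_google.py)

runner = GoogleInference(
    "predictions/google.json",  # save_path for the merged results (placeholder)
    sampling_rate=0.1,          # process a 10% sample of the inputs
    random_seed=42,             # make that sample reproducible
)
runner.infer("dataset/images")  # file or directory of documents (placeholder)
```
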
scripts/infer_llamaparse.py CHANGED
@@ -1,14 +1,16 @@
 
 
 
 
 
 
1
  import os
2
  import time
3
- import json
4
  import markdown
5
  import requests
6
- import argparse
7
- from pathlib import Path
8
-
9
- from bs4 import BeautifulSoup
10
- from utils import read_file_paths, validate_json_save_path, load_json_file
11
 
 
12
 
13
  CATEGORY_MAP = {
14
  "text": "paragraph",
@@ -17,36 +19,83 @@ CATEGORY_MAP = {
17
  }
18
 
19
 
20
- class LlamaParseInference:
 
 
21
  def __init__(
22
  self,
23
  save_path,
24
- input_formats=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"]
 
 
 
 
 
 
 
25
  ):
26
- """Initialize the LlamaParseInference class
 
27
  Args:
28
  save_path (str): the json path to save the results
29
  input_formats (list, optional): the supported file formats.
 
 
 
 
 
 
 
30
  """
31
- self.formats = input_formats
32
-
 
 
 
 
 
 
 
 
 
 
33
  self.api_key = os.getenv("LLAMAPARSE_API_KEY") or ""
34
- self.post_url = os.getenv("LLAMAPARSE_POST_URL") or ""
35
- self.get_url = os.getenv("LLAMAPARSE_GET_URL") or ""
36
 
37
  if not all([self.api_key, self.post_url, self.get_url]):
38
  raise ValueError("Please set the environment variables for LlamaParse")
39
 
40
  self.headers = {
41
- "Accept": "application/json",
42
- "Authorization": f"Bearer {self.api_key}",
43
  }
44
 
45
- validate_json_save_path(save_path)
46
- self.save_path = save_path
47
- self.processed_data = load_json_file(save_path)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
48
 
49
  def post_process(self, data):
 
50
  processed_dict = {}
51
  for input_key in data.keys():
52
  output_data = data[input_key]
@@ -58,9 +107,9 @@ class LlamaParseInference:
58
  id_counter = 0
59
  for elem in output_data["pages"]:
60
  for item in elem["items"]:
61
-
62
  coord = [[0, 0], [0, 0], [0, 0], [0, 0]]
63
  category = item["type"]
 
64
  if category == "table":
65
  transcription = markdown.markdown(
66
  item["md"],
@@ -70,8 +119,7 @@ class LlamaParseInference:
70
  else:
71
  transcription = item["value"]
72
  pts = item["bBox"]
73
- if "x" in pts and "y" in pts and \
74
- "w" in pts and "h" in pts:
75
  coord = [
76
  [pts["x"], pts["y"]],
77
  [pts["x"] + pts["w"], pts["y"]],
@@ -80,8 +128,8 @@ class LlamaParseInference:
80
  ]
81
 
82
  xy_coord = [{"x": x, "y": y} for x, y in coord]
83
-
84
  category = CATEGORY_MAP.get(category, "paragraph")
 
85
  data_dict = {
86
  "coordinates": xy_coord,
87
  "category": category,
@@ -93,107 +141,112 @@ class LlamaParseInference:
93
  }
94
  }
95
  processed_dict[input_key]["elements"].append(data_dict)
96
-
97
  id_counter += 1
98
 
99
- for key in self.processed_data:
100
- processed_dict[key] = self.processed_data[key]
101
-
102
- return processed_dict
103
-
104
- def infer(self, file_path):
105
- """Infer the layout of the documents in the given file path
106
- Args:
107
- file_path (str): the path to the file or directory containing the documents to process
108
- """
109
- paths = read_file_paths(file_path, supported_formats=self.formats)
110
-
111
- error_files = []
112
-
113
- result_dict = {}
114
- for filepath in paths:
115
- print("({}/{}) Processing {}".format(paths.index(filepath) + 1, len(paths), filepath))
116
-
117
- filename = filepath.name
118
- if filename in self.processed_data.keys():
119
- print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
120
- continue
121
-
122
- try:
123
- with open(filepath, "rb") as file_data:
124
- file_data = {
125
- "file": ("dummy.pdf", file_data, "")
126
- }
127
- data = {
128
- "invalidate_cache": True,
129
- "premium_mode": True,
130
- "disable_ocr": False
131
- }
132
- response = requests.post(
133
- self.post_url, headers=self.headers, files=file_data, data=data
134
- )
135
-
136
  result_data = response.json()
137
- status = result_data["status"]
138
- id_ = result_data["id"]
139
-
140
- while status == "PENDING":
141
- get_url = f"{self.get_url}/{id_}"
142
- response = requests.get(get_url, headers=self.headers)
143
-
144
- response_json = response.json()
145
- status = response_json["status"]
146
- if status == "SUCCESS":
147
- get_url = f"{self.get_url}/{id_}/result/json"
148
- response = requests.get(get_url, headers=self.headers)
149
- break
150
-
151
- time.sleep(1)
152
-
153
- result_dict[filename] = response.json()
154
- except Exception as e:
155
- print(e)
156
- print("Error processing document..")
157
- error_files.append(filepath)
158
- continue
159
-
160
- result_dict = self.post_process(result_dict)
161
-
162
- with open(self.save_path, "w") as f:
163
- json.dump(result_dict, f)
164
-
165
- for error_file in error_files:
166
- print(f"Error processing file: {error_file}")
167
-
168
- print("Finished processing all documents")
169
- print("Results saved to: {}".format(self.save_path))
170
- print("Number of errors: {}".format(len(error_files)))
 
 
 
 
 
 
 
 
 
 
 
171
 
172
 
173
  if __name__ == "__main__":
174
- args = argparse.ArgumentParser()
175
- args.add_argument(
176
- "--data_path",
177
- type=str, default="", required=True,
178
- help="Path containing the documents to process"
179
- )
180
- args.add_argument(
181
- "--save_path",
182
- type=str, default="", required=True,
183
- help="Path to save the results"
184
- )
185
- args.add_argument(
186
- "--input_formats",
187
- type=list, default=[
188
- ".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"
189
- ],
190
- help="Supported input file formats"
191
- )
192
- args = args.parse_args()
193
 
194
  llamaparse_inference = LlamaParseInference(
195
- args.save_path,
196
- input_formats=args.input_formats
 
 
 
 
 
 
 
197
  )
198
  llamaparse_inference.infer(args.data_path)
199
-
 
1
+ """
2
+ LlamaParse document layout inference.
3
+
4
+ Uses the LlamaParse parsing API.
5
+ """
6
+ import asyncio
7
  import os
8
  import time
9
+
10
  import markdown
11
  import requests
 
 
 
 
 
12
 
13
+ from base import HttpClientInference, create_argument_parser, parse_args_with_extra
14
 
15
  CATEGORY_MAP = {
16
  "text": "paragraph",
 
19
  }
20
 
21
 
22
+ class LlamaParseInference(HttpClientInference):
23
+ """LlamaParse document layout inference."""
24
+
25
  def __init__(
26
  self,
27
  save_path,
28
+ input_formats=None,
29
+ mode="cost-effective",
30
+ concurrent_limit=None,
31
+ sampling_rate=1.0,
32
+ request_timeout=600,
33
+ random_seed=None,
34
+ group_by_document=False,
35
+ file_ext_mapping=None
36
  ):
37
+ """Initialize the LlamaParseInference class.
38
+
39
  Args:
40
  save_path (str): the json path to save the results
41
  input_formats (list, optional): the supported file formats.
42
+ mode (str, optional): parsing mode - 'cost-effective', 'agentic', or 'agentic-plus'
43
+ concurrent_limit (int, optional): maximum number of concurrent API requests
44
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0)
45
+ request_timeout (float, optional): timeout in seconds for API requests
46
+ random_seed (int, optional): random seed for reproducible sampling
47
+ group_by_document (bool, optional): group per-page results into document-level
48
+ file_ext_mapping (str or dict, optional): file extension mapping for grouping
49
  """
50
+ super().__init__(
51
+ save_path,
52
+ input_formats,
53
+ concurrent_limit,
54
+ sampling_rate,
55
+ request_timeout,
56
+ random_seed,
57
+ group_by_document,
58
+ file_ext_mapping
59
+ )
60
+
61
+ self.mode = mode
62
  self.api_key = os.getenv("LLAMAPARSE_API_KEY") or ""
63
+ self.post_url = os.getenv("LLAMAPARSE_POST_URL") or "https://api.cloud.llamaindex.ai/api/v1/parsing/upload"
64
+ self.get_url = os.getenv("LLAMAPARSE_GET_URL") or "https://api.cloud.llamaindex.ai/api/v1/parsing/job"
65
 
66
  if not all([self.api_key, self.post_url, self.get_url]):
67
  raise ValueError("Please set the environment variables for LlamaParse")
68
 
69
  self.headers = {
70
+ "Accept": "application/json",
71
+ "Authorization": f"Bearer {self.api_key}",
72
  }
73
 
74
+ def _get_parse_data(self):
75
+ """Get parse data configuration based on mode."""
76
+ data = {
77
+ "invalidate_cache": True,
78
+ "high_res_ocr": True,
79
+ "adaptive_long_table": True,
80
+ "outlined_table_extraction": True,
81
+ "output_tables_as_HTML": True,
82
+ }
83
+
84
+ if self.mode == "cost-effective":
85
+ data["parse_mode"] = "parse_page_with_llm"
86
+ elif self.mode == "agentic":
87
+ data["parse_mode"] = "parse_page_with_agent"
88
+ data["model"] = "openai-gpt-4-1-mini"
89
+ elif self.mode == "agentic-plus":
90
+ data["parse_mode"] = "parse_page_with_agent"
91
+ data["model"] = "anthropic-sonnet-4.0"
92
+ else:
93
+ raise ValueError(f"Unknown mode: {self.mode}. Choose from 'cost-effective', 'agentic', or 'agentic-plus'")
94
+
95
+ return data
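
Illustrative only: the request payload `_get_parse_data()` assembles for the `"agentic"` mode, as implied by the branches above:

```
agentic_payload = {
    "invalidate_cache": True,
    "high_res_ocr": True,
    "adaptive_long_table": True,
    "outlined_table_extraction": True,
    "output_tables_as_HTML": True,
    "parse_mode": "parse_page_with_agent",  # set by the "agentic" branch
    "model": "openai-gpt-4-1-mini",         # model pinned by that branch
}
```
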
96
 
97
  def post_process(self, data):
98
+ """Post-process LlamaParse API response to standard format."""
99
  processed_dict = {}
100
  for input_key in data.keys():
101
  output_data = data[input_key]
 
107
  id_counter = 0
108
  for elem in output_data["pages"]:
109
  for item in elem["items"]:
 
110
  coord = [[0, 0], [0, 0], [0, 0], [0, 0]]
111
  category = item["type"]
112
+
113
  if category == "table":
114
  transcription = markdown.markdown(
115
  item["md"],
 
119
  else:
120
  transcription = item["value"]
121
  pts = item["bBox"]
122
+ if "x" in pts and "y" in pts and "w" in pts and "h" in pts:
 
123
  coord = [
124
  [pts["x"], pts["y"]],
125
  [pts["x"] + pts["w"], pts["y"]],
 
128
  ]
129
 
130
  xy_coord = [{"x": x, "y": y} for x, y in coord]
 
131
  category = CATEGORY_MAP.get(category, "paragraph")
132
+
133
  data_dict = {
134
  "coordinates": xy_coord,
135
  "category": category,
 
141
  }
142
  }
143
  processed_dict[input_key]["elements"].append(data_dict)
 
144
  id_counter += 1
145
 
146
+ return self._merge_processed_data(processed_dict)
147
+
148
+ async def _call_api_async(self, filepath, client=None):
149
+ """Make the actual async API call for a file."""
150
+ filename = filepath.name
151
+
152
+ with open(filepath, "rb") as file_data:
153
+ files = {"file": ("dummy.pdf", file_data, "")}
154
+ data = self._get_parse_data()
155
+
156
+ response = await client.post(
157
+ self.post_url,
158
+ headers=self.headers,
159
+ files=files,
160
+ data=data,
161
+ timeout=60.0
162
+ )
163
+
164
+ result_data = response.json()
165
+
166
+ if "status" not in result_data:
167
+ raise Exception(f"Missing 'status' key in response: {result_data}")
168
+
169
+ status = result_data["status"]
170
+ id_ = result_data["id"]
171
+
172
+ # Poll for completion
173
+ while status == "PENDING":
174
+ get_url = f"{self.get_url}/{id_}"
175
+ response = await client.get(get_url, headers=self.headers, timeout=30.0)
176
+
177
+ response_json = response.json()
178
+ status = response_json.get("status", "UNKNOWN")
179
+
180
+ if status == "SUCCESS":
181
+ get_url = f"{self.get_url}/{id_}/result/json"
182
+ response = await client.get(get_url, headers=self.headers, timeout=30.0)
183
  result_data = response.json()
184
+ break
185
+
186
+ await asyncio.sleep(1)
187
+
188
+ if status != "SUCCESS":
189
+ raise Exception(f"{filename}: {result_data}")
190
+
191
+ return result_data
192
+
193
+ def _call_api_sync(self, filepath):
194
+ """Make the actual sync API call for a file."""
195
+ filename = filepath.name
196
+
197
+ with open(filepath, "rb") as file_data:
198
+ files = {"file": ("dummy.pdf", file_data, "")}
199
+ data = self._get_parse_data()
200
+
201
+ response = requests.post(
202
+ self.post_url, headers=self.headers, files=files, data=data
203
+ )
204
+
205
+ result = response.json()
206
+
207
+ if "status" not in result:
208
+ raise Exception(f"Missing 'status' key in response: {result}")
209
+
210
+ status = result["status"]
211
+ id_ = result["id"]
212
+
213
+ while status == "PENDING":
214
+ get_url = f"{self.get_url}/{id_}"
215
+ response = requests.get(get_url, headers=self.headers)
216
+
217
+ result = response.json()
218
+ status = result.get("status", "UNKNOWN")
219
+ time.sleep(1)
220
+
221
+ if status == "SUCCESS":
222
+ get_url = f"{self.get_url}/{id_}/result/json"
223
+ response = requests.get(get_url, headers=self.headers)
224
+ result = response.json()
225
+ else:
226
+ raise Exception(f"{filename}: {result}")
227
+
228
+ return result
229
 
230
 
231
  if __name__ == "__main__":
232
+ parser = create_argument_parser("LlamaParse document layout inference")
233
+ # Override --mode with LlamaParse-specific choices
234
+ for action in parser._actions:
235
+ if action.dest == 'mode':
236
+ action.choices = ["cost-effective", "agentic", "agentic-plus"]
237
+ action.help = "Parsing mode: 'cost-effective' (default), 'agentic', or 'agentic-plus'"
238
+ break
239
+ args = parse_args_with_extra(parser)
 
 
 
 
 
 
 
 
 
 
 
240
 
241
  llamaparse_inference = LlamaParseInference(
242
+ save_path=args.save_path,
243
+ input_formats=args.input_formats,
244
+ mode=args.mode or "cost-effective",
245
+ concurrent_limit=args.concurrent,
246
+ sampling_rate=args.sampling_rate,
247
+ request_timeout=args.request_timeout,
248
+ random_seed=args.random_seed,
249
+ group_by_document=args.group_by_document,
250
+ file_ext_mapping=args.file_ext_mapping
251
  )
252
  llamaparse_inference.infer(args.data_path)
 
scripts/infer_microsoft.py CHANGED
@@ -1,12 +1,15 @@
 
 
 
 
 
 
1
  import os
2
- import json
3
- import argparse
4
 
5
  from azure.ai.formrecognizer import DocumentAnalysisClient
6
  from azure.core.credentials import AzureKeyCredential
7
 
8
- from utils import read_file_paths, validate_json_save_path, load_json_file
9
-
10
 
11
  CATEGORY_MAP = {
12
  "Title": "heading1",
@@ -21,17 +24,43 @@ CATEGORY_MAP = {
21
  }
22
 
23
 
24
- class MicrosoftInference:
 
 
25
  def __init__(
26
  self,
27
  save_path,
28
- input_formats=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"]
 
 
 
 
 
 
29
  ):
30
- """Initialize the MicrosoftInference class
 
31
  Args:
32
  save_path (str): the json path to save the results
33
  input_formats (list, optional): the supported file formats.
 
 
 
 
 
 
34
  """
 
 
 
 
 
 
 
 
 
 
 
35
  MICROSOFT_API_KEY = os.getenv("MICROSOFT_API_KEY") or ""
36
  MICROSOFT_ENDPOINT = os.getenv("MICROSOFT_ENDPOINT") or ""
37
 
@@ -39,29 +68,21 @@ class MicrosoftInference:
39
  raise ValueError("Please set the environment variables for Microsoft")
40
 
41
  self.document_analysis_client = DocumentAnalysisClient(
42
- endpoint=MICROSOFT_ENDPOINT, credential=AzureKeyCredential(MICROSOFT_API_KEY)
 
43
  )
44
 
45
- validate_json_save_path(save_path)
46
- self.save_path = save_path
47
- self.processed_data = load_json_file(save_path)
48
-
49
- self.formats = input_formats
50
-
51
  def post_process(self, data):
 
52
  processed_dict = {}
53
  for input_key in data.keys():
54
  output_data = data[input_key]
55
 
56
- processed_dict[input_key] = {
57
- "elements": []
58
- }
59
 
60
  id_counter = 0
61
  for par_elem in output_data["paragraphs"]:
62
- category = par_elem["role"]
63
- category = CATEGORY_MAP.get(category, "paragraph")
64
-
65
  transcription = par_elem["content"]
66
  coord = [[pt["x"], pt["y"]] for pt in par_elem["bounding_regions"][0]["polygon"]]
67
  xy_coord = [{"x": x, "y": y} for x, y in coord]
@@ -70,14 +91,9 @@ class MicrosoftInference:
70
  "coordinates": xy_coord,
71
  "category": category,
72
  "id": id_counter,
73
- "content": {
74
- "text": transcription,
75
- "html": "",
76
- "markdown": ""
77
- }
78
  }
79
  processed_dict[input_key]["elements"].append(data_dict)
80
-
81
  id_counter += 1
82
 
83
  html_transcription = ""
@@ -85,16 +101,13 @@ class MicrosoftInference:
85
  coord = [[pt["x"], pt["y"]] for pt in table_elem["bounding_regions"][0]["polygon"]]
86
  xy_coord = [{"x": x, "y": y} for x, y in coord]
87
 
88
- category = "table"
89
-
90
  html_transcription += "<table>"
91
 
92
- # Create a matrix to represent the table
93
  table_matrix = [
94
- ["" for _ in range(table_elem["column_count"])] for _ in range(table_elem["row_count"])
 
95
  ]
96
 
97
- # Fill the matrix with table data
98
  for cell in table_elem["cells"]:
99
  row = cell["row_index"]
100
  col = cell["column_index"]
@@ -102,116 +115,64 @@ class MicrosoftInference:
102
  colspan = cell.get("column_span", 1)
103
  content = cell["content"]
104
 
105
- # Insert content into the matrix, handle rowspan and colspan
106
  for r in range(row, row + rowspan):
107
  for c in range(col, col + colspan):
108
  if r == row and c == col:
109
  table_matrix[r][c] = f"<td rowspan='{rowspan}' colspan='{colspan}'>{content}</td>"
110
  else:
111
- # Mark cells covered by rowspan or colspan
112
  table_matrix[r][c] = None
113
 
114
- # Generate HTML from the matrix
115
  for row in table_matrix:
116
  html_transcription += "<tr>"
117
  for cell in row:
118
  if cell is not None:
119
- html_transcription += f"{cell}"
120
  html_transcription += "</tr>"
121
 
122
  html_transcription += "</table>"
123
 
124
  data_dict = {
125
  "coordinates": xy_coord,
126
- "category": category,
127
  "id": id_counter,
128
- "content": {
129
- "text": "",
130
- "html": html_transcription,
131
- "markdown": ""
132
- }
133
  }
134
  processed_dict[input_key]["elements"].append(data_dict)
135
-
136
  id_counter += 1
137
 
138
- for key in self.processed_data:
139
- processed_dict[key] = self.processed_data[key]
140
-
141
- return processed_dict
142
-
143
 
144
- def infer(self, file_path):
145
- """Infer the layout of the documents in the given file path
146
- Args:
147
- file_path (str): the path to the file or directory containing the documents to process
148
- """
149
- paths = read_file_paths(file_path, supported_formats=self.formats)
150
-
151
- error_files = []
152
-
153
- result_dict = {}
154
- for idx, filepath in enumerate(paths):
155
- print("({}/{}) {}".format(idx+1, len(paths), filepath))
156
-
157
- filename = filepath.name
158
- if filename in self.processed_data.keys():
159
- print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
160
- continue
161
-
162
- input_data = open(filepath, "rb")
163
-
164
- try:
165
- poller = self.document_analysis_client.begin_analyze_document(
166
- "prebuilt-layout", document=input_data
167
- )
168
- result = poller.result()
169
 
170
- json_result = result.to_dict()
171
- except Exception as e:
172
- print(e)
173
- print("Error processing document..")
174
- error_files.append(filepath)
175
- continue
176
 
177
- result_dict[filename] = json_result
178
-
179
- result_dict = self.post_process(result_dict)
180
-
181
- with open(self.save_path, "w") as f:
182
- json.dump(result_dict, f)
183
-
184
- for error_file in error_files:
185
- print(f"Error processing file: {error_file}")
186
-
187
- print("Finished processing all documents")
188
- print("Results saved to: {}".format(self.save_path))
189
- print("Number of errors: {}".format(len(error_files)))
190
 
191
 
192
  if __name__ == "__main__":
193
- args = argparse.ArgumentParser()
194
- args.add_argument(
195
- "--data_path",
196
- type=str, default="", required=True,
197
- help="Path containing the documents to process"
198
- )
199
- args.add_argument(
200
- "--save_path",
201
- type=str, default="", required=True,
202
- help="Path to save the results"
203
- )
204
- args.add_argument(
205
- "--input_formats",
206
- type=list, default=[
207
- ".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"
208
- ],
209
- help="Supported input file formats"
210
- )
211
- args = args.parse_args()
212
 
213
  microsoft_inference = MicrosoftInference(
214
  args.save_path,
215
- input_formats=args.input_formats
 
 
 
 
 
 
216
  )
217
  microsoft_inference.infer(args.data_path)
 
1
+ """
2
+ Microsoft Azure Document Intelligence layout inference.
3
+
4
+ Uses Azure Document Intelligence (Form Recognizer) API for document analysis.
5
+ """
6
+ import asyncio
7
  import os
 
 
8
 
9
  from azure.ai.formrecognizer import DocumentAnalysisClient
10
  from azure.core.credentials import AzureKeyCredential
11
 
12
+ from base import BaseInference, create_argument_parser, parse_args_with_extra
 
13
 
14
  CATEGORY_MAP = {
15
  "Title": "heading1",
 
24
  }
25
 
26
 
27
+ class MicrosoftInference(BaseInference):
28
+ """Microsoft Azure Document Intelligence layout inference."""
29
+
30
  def __init__(
31
  self,
32
  save_path,
33
+ input_formats=None,
34
+ concurrent_limit=None,
35
+ sampling_rate=1.0,
36
+ request_timeout=600,
37
+ random_seed=None,
38
+ group_by_document=False,
39
+ file_ext_mapping=None
40
  ):
41
+ """Initialize the MicrosoftInference class.
42
+
43
  Args:
44
  save_path (str): the json path to save the results
45
  input_formats (list, optional): the supported file formats.
46
+ concurrent_limit (int, optional): maximum number of concurrent API requests
47
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0)
48
+ request_timeout (float, optional): timeout in seconds for API requests
49
+ random_seed (int, optional): random seed for reproducible sampling
50
+ group_by_document (bool, optional): group per-page results into document-level
51
+ file_ext_mapping (str or dict, optional): file extension mapping for grouping
52
  """
53
+ super().__init__(
54
+ save_path,
55
+ input_formats,
56
+ concurrent_limit,
57
+ sampling_rate,
58
+ request_timeout,
59
+ random_seed,
60
+ group_by_document,
61
+ file_ext_mapping
62
+ )
63
+
64
  MICROSOFT_API_KEY = os.getenv("MICROSOFT_API_KEY") or ""
65
  MICROSOFT_ENDPOINT = os.getenv("MICROSOFT_ENDPOINT") or ""
66
 
 
68
  raise ValueError("Please set the environment variables for Microsoft")
69
 
70
  self.document_analysis_client = DocumentAnalysisClient(
71
+ endpoint=MICROSOFT_ENDPOINT,
72
+ credential=AzureKeyCredential(MICROSOFT_API_KEY)
73
  )
74
 
 
 
 
 
 
 
75
  def post_process(self, data):
76
+ """Post-process Microsoft Document Intelligence API response to standard format."""
77
  processed_dict = {}
78
  for input_key in data.keys():
79
  output_data = data[input_key]
80
 
81
+ processed_dict[input_key] = {"elements": []}
 
 
82
 
83
  id_counter = 0
84
  for par_elem in output_data["paragraphs"]:
85
+ category = CATEGORY_MAP.get(par_elem["role"], "paragraph")
 
 
86
  transcription = par_elem["content"]
87
  coord = [[pt["x"], pt["y"]] for pt in par_elem["bounding_regions"][0]["polygon"]]
88
  xy_coord = [{"x": x, "y": y} for x, y in coord]
 
91
  "coordinates": xy_coord,
92
  "category": category,
93
  "id": id_counter,
94
+ "content": {"text": transcription, "html": "", "markdown": ""}
 
 
 
 
95
  }
96
  processed_dict[input_key]["elements"].append(data_dict)
 
97
  id_counter += 1
98
 
99
  html_transcription = ""
 
101
  coord = [[pt["x"], pt["y"]] for pt in table_elem["bounding_regions"][0]["polygon"]]
102
  xy_coord = [{"x": x, "y": y} for x, y in coord]
103
 
 
 
104
  html_transcription += "<table>"
105
 
 
106
  table_matrix = [
107
+ ["" for _ in range(table_elem["column_count"])]
108
+ for _ in range(table_elem["row_count"])
109
  ]
110
 
 
111
  for cell in table_elem["cells"]:
112
  row = cell["row_index"]
113
  col = cell["column_index"]
 
115
  colspan = cell.get("column_span", 1)
116
  content = cell["content"]
117
 
 
118
  for r in range(row, row + rowspan):
119
  for c in range(col, col + colspan):
120
  if r == row and c == col:
121
  table_matrix[r][c] = f"<td rowspan='{rowspan}' colspan='{colspan}'>{content}</td>"
122
  else:
 
123
  table_matrix[r][c] = None
124
 
 
125
  for row in table_matrix:
126
  html_transcription += "<tr>"
127
  for cell in row:
128
  if cell is not None:
129
+ html_transcription += cell
130
  html_transcription += "</tr>"
131
 
132
  html_transcription += "</table>"
133
 
134
  data_dict = {
135
  "coordinates": xy_coord,
136
+ "category": "table",
137
  "id": id_counter,
138
+ "content": {"text": "", "html": html_transcription, "markdown": ""}
 
 
 
 
139
  }
140
  processed_dict[input_key]["elements"].append(data_dict)
 
141
  id_counter += 1
142
 
143
+ return self._merge_processed_data(processed_dict)
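
A standalone worked example of the rowspan/colspan handling used in `post_process` above. The cell dicts mirror the field names read from the Azure response but are constructed here for illustration, not taken from a real API reply:

```
cells = [
    {"row_index": 0, "column_index": 0, "row_span": 1, "column_span": 2, "content": "Header"},
    {"row_index": 1, "column_index": 0, "row_span": 1, "column_span": 1, "content": "A"},
    {"row_index": 1, "column_index": 1, "row_span": 1, "column_span": 1, "content": "B"},
]

matrix = [["" for _ in range(2)] for _ in range(2)]
for cell in cells:
    r0, c0 = cell["row_index"], cell["column_index"]
    for r in range(r0, r0 + cell["row_span"]):
        for c in range(c0, c0 + cell["column_span"]):
            if (r, c) == (r0, c0):
                # Only the top-left cell of a span emits a <td>.
                matrix[r][c] = (
                    f"<td rowspan='{cell['row_span']}' "
                    f"colspan='{cell['column_span']}'>{cell['content']}</td>"
                )
            else:
                # Cells covered by the span are marked so no extra <td> is emitted.
                matrix[r][c] = None

html = "<table>"
for row in matrix:
    html += "<tr>" + "".join(c for c in row if c is not None) + "</tr>"
html += "</table>"
print(html)
# <table><tr><td rowspan='1' colspan='2'>Header</td></tr>
#        <tr><td rowspan='1' colspan='1'>A</td><td rowspan='1' colspan='1'>B</td></tr></table>
```
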
 
 
 
 
144
 
145
+ def _analyze_document(self, filepath):
146
+ """Analyze document using Azure Document Intelligence."""
147
+ with open(filepath, "rb") as input_data:
148
+ poller = self.document_analysis_client.begin_analyze_document(
149
+ "prebuilt-layout", document=input_data
150
+ )
151
+ result = poller.result()
152
+ return result.to_dict()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
153
 
154
+ async def _call_api_async(self, filepath, *args, **kwargs):
155
+ """Make the actual async API call for a file."""
156
+ loop = asyncio.get_event_loop()
157
+ return await loop.run_in_executor(None, self._analyze_document, filepath)
 
 
158
 
159
+ def _call_api_sync(self, filepath, *args, **kwargs):
160
+ """Make the actual sync API call for a file."""
161
+ return self._analyze_document(filepath)
 
 
 
 
 
 
 
 
 
 
162
 
163
 
164
  if __name__ == "__main__":
165
+ parser = create_argument_parser("Microsoft Azure Document Intelligence layout inference")
166
+ args = parse_args_with_extra(parser)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
167
 
168
  microsoft_inference = MicrosoftInference(
169
  args.save_path,
170
+ input_formats=args.input_formats,
171
+ concurrent_limit=args.concurrent,
172
+ sampling_rate=args.sampling_rate,
173
+ request_timeout=args.request_timeout,
174
+ random_seed=args.random_seed,
175
+ group_by_document=args.group_by_document,
176
+ file_ext_mapping=args.file_ext_mapping
177
  )
178
  microsoft_inference.infer(args.data_path)
scripts/infer_unstructured.py CHANGED
@@ -1,14 +1,15 @@
 
 
 
 
 
 
1
  import os
2
- import time
3
- import json
4
- import argparse
5
- from pathlib import Path
6
 
7
  import unstructured_client
8
  from unstructured_client.models import operations, shared
9
 
10
- from utils import read_file_paths, validate_json_save_path, load_json_file
11
-
12
 
13
  CATEGORY_MAP = {
14
  "NarrativeText": "paragraph",
@@ -28,18 +29,42 @@ CATEGORY_MAP = {
28
  }
29
 
30
 
31
- class UnstructuredInference:
 
 
32
  def __init__(
33
  self,
34
  save_path,
35
- input_formats=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"]
 
 
 
 
 
 
36
  ):
37
- """Initialize the UnstructuredInference class
 
38
  Args:
39
  save_path (str): the json path to save the results
40
  input_formats (list, optional): the supported file formats.
 
 
 
 
 
 
41
  """
42
- self.formats = input_formats
 
 
 
 
 
 
 
 
 
43
 
44
  self.api_key = os.getenv("UNSTRUCTURED_API_KEY") or ""
45
  self.url = os.getenv("UNSTRUCTURED_URL") or ""
@@ -51,29 +76,28 @@ class UnstructuredInference:
51
  self.get_coordinates = True
52
  self.infer_table_structure = True
53
 
54
- # create save basepath
55
- validate_json_save_path(save_path)
56
- self.save_path = save_path
57
- self.processed_data = load_json_file(save_path)
58
-
59
  self.client = unstructured_client.UnstructuredClient(
60
  api_key_auth=self.api_key,
61
  server_url=self.url,
62
  )
63
 
64
  def post_process(self, data):
 
65
  processed_dict = {}
66
  for input_key in data.keys():
67
  output_data = data[input_key]
68
 
69
- processed_dict[input_key] = {
70
- "elements": []
71
- }
 
 
72
 
73
  id_counter = 0
74
  for elem in output_data:
75
  transcription = elem["text"]
76
  category = CATEGORY_MAP.get(elem["type"], "paragraph")
 
77
  if elem["metadata"]["coordinates"] is None:
78
  continue
79
 
@@ -93,96 +117,50 @@ class UnstructuredInference:
93
  }
94
  }
95
  processed_dict[input_key]["elements"].append(data_dict)
96
-
97
  id_counter += 1
98
 
99
- for key in self.processed_data:
100
- processed_dict[key] = self.processed_data[key]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
101
 
102
- return processed_dict
 
103
 
104
- def infer(self, file_path):
105
- """Infer the layout of the documents in the given file path
106
- Args:
107
- file_path (str): the path to the file or directory containing the documents to process
108
- """
109
- paths = read_file_paths(file_path, supported_formats=self.formats)
110
-
111
- error_files = []
112
-
113
- result_dict = {}
114
- for filepath in paths:
115
- print("({}/{}) Processing {}".format(paths.index(filepath) + 1, len(paths), filepath))
116
- filename = filepath.name
117
- if filename in self.processed_data.keys():
118
- print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
119
- continue
120
-
121
- with open(filepath, "rb") as f:
122
- data = f.read()
123
-
124
- req = operations.PartitionRequest(
125
- partition_parameters=shared.PartitionParameters(
126
- files=shared.Files(
127
- content=data,
128
- file_name=str(filepath),
129
- ),
130
- # --- Other partition parameters ---
131
- strategy=shared.Strategy.HI_RES,
132
- pdf_infer_table_structure=self.infer_table_structure,
133
- coordinates=self.get_coordinates,
134
- languages=self.languages,
135
- ),
136
- )
137
-
138
- try:
139
- res = self.client.general.partition(request=req)
140
- elements = res.elements
141
- except Exception as e:
142
- print(e)
143
- print("Error processing document..")
144
- error_files.append(filepath)
145
- continue
146
-
147
- result_dict[filename] = elements
148
-
149
- result_dict = self.post_process(result_dict)
150
-
151
- with open(self.save_path, "w") as f:
152
- json.dump(result_dict, f)
153
-
154
- for error_file in error_files:
155
- print(f"Error processing file: {error_file}")
156
-
157
- print("Finished processing all documents")
158
- print("Results saved to: {}".format(self.save_path))
159
- print("Number of errors: {}".format(len(error_files)))
160
 
161
 
162
  if __name__ == "__main__":
163
- args = argparse.ArgumentParser()
164
- args.add_argument(
165
- "--data_path",
166
- type=str, default="", required=True,
167
- help="Path containing the documents to process"
168
- )
169
- args.add_argument(
170
- "--save_path",
171
- type=str, default="", required=True,
172
- help="Path to save the results"
173
- )
174
- args.add_argument(
175
- "--input_formats",
176
- type=list, default=[
177
- ".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"
178
- ],
179
- help="Supported input file formats"
180
- )
181
- args = args.parse_args()
182
 
183
  unstructured_inference = UnstructuredInference(
184
  args.save_path,
185
- input_formats=args.input_formats
 
 
 
 
 
 
186
  )
187
  unstructured_inference.infer(args.data_path)
188
-
 
1
+ """
2
+ Unstructured document layout inference.
3
+
4
+ Uses Unstructured API for document analysis.
5
+ """
6
+ import asyncio
7
  import os
 
 
 
 
8
 
9
  import unstructured_client
10
  from unstructured_client.models import operations, shared
11
 
12
+ from base import BaseInference, create_argument_parser, parse_args_with_extra
 
13
 
14
  CATEGORY_MAP = {
15
  "NarrativeText": "paragraph",
 
29
  }
30
 
31
 
32
+ class UnstructuredInference(BaseInference):
33
+ """Unstructured document layout inference."""
34
+
35
  def __init__(
36
  self,
37
  save_path,
38
+ input_formats=None,
39
+ concurrent_limit=None,
40
+ sampling_rate=1.0,
41
+ request_timeout=600,
42
+ random_seed=None,
43
+ group_by_document=False,
44
+ file_ext_mapping=None
45
  ):
46
+ """Initialize the UnstructuredInference class.
47
+
48
  Args:
49
  save_path (str): the json path to save the results
50
  input_formats (list, optional): the supported file formats.
51
+ concurrent_limit (int, optional): maximum number of concurrent API requests
52
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0)
53
+ request_timeout (float, optional): timeout in seconds for API requests
54
+ random_seed (int, optional): random seed for reproducible sampling
55
+ group_by_document (bool, optional): group per-page results into document-level
56
+ file_ext_mapping (str or dict, optional): file extension mapping for grouping
57
  """
58
+ super().__init__(
59
+ save_path,
60
+ input_formats,
61
+ concurrent_limit,
62
+ sampling_rate,
63
+ request_timeout,
64
+ random_seed,
65
+ group_by_document,
66
+ file_ext_mapping
67
+ )
68
 
69
  self.api_key = os.getenv("UNSTRUCTURED_API_KEY") or ""
70
  self.url = os.getenv("UNSTRUCTURED_URL") or ""
 
76
  self.get_coordinates = True
77
  self.infer_table_structure = True
78
 
 
 
 
 
 
79
  self.client = unstructured_client.UnstructuredClient(
80
  api_key_auth=self.api_key,
81
  server_url=self.url,
82
  )
83
 
84
  def post_process(self, data):
85
+ """Post-process Unstructured API response to standard format."""
86
  processed_dict = {}
87
  for input_key in data.keys():
88
  output_data = data[input_key]
89
 
90
+ # Handle wrapped structure from interim results
91
+ if isinstance(output_data, dict) and "result" in output_data:
92
+ output_data = output_data["result"]
93
+
94
+ processed_dict[input_key] = {"elements": []}
95
 
96
  id_counter = 0
97
  for elem in output_data:
98
  transcription = elem["text"]
99
  category = CATEGORY_MAP.get(elem["type"], "paragraph")
100
+
101
  if elem["metadata"]["coordinates"] is None:
102
  continue
103
 
 
117
  }
118
  }
119
  processed_dict[input_key]["elements"].append(data_dict)
 
120
  id_counter += 1
121
 
122
+ return self._merge_processed_data(processed_dict)
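
Illustrative only: the two input shapes that `post_process` accepts, per the `"result"` check above (element contents are elided here):

```
elements = []  # list of Unstructured element dicts (contents elided)

direct = {"page_001.jpg": elements}                # shape from a live API call
wrapped = {"page_001.jpg": {"result": elements}}   # shape replayed from an interim file
# post_process unwraps the second form before iterating over the elements.
```
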
123
+
124
+ def _partition_document(self, filepath):
125
+ """Partition document using Unstructured API."""
126
+ with open(filepath, "rb") as f:
127
+ data = f.read()
128
+
129
+ req = operations.PartitionRequest(
130
+ partition_parameters=shared.PartitionParameters(
131
+ files=shared.Files(content=data, file_name=str(filepath)),
132
+ strategy=shared.Strategy.HI_RES,
133
+ pdf_infer_table_structure=self.infer_table_structure,
134
+ coordinates=self.get_coordinates,
135
+ languages=self.languages,
136
+ ),
137
+ )
138
 
139
+ res = self.client.general.partition(request=req)
140
+ return res.elements
141
 
142
+ async def _call_api_async(self, filepath, *args, **kwargs):
143
+ """Make the actual async API call for a file."""
144
+ loop = asyncio.get_event_loop()
145
+ return await loop.run_in_executor(None, self._partition_document, filepath)
146
+
147
+ def _call_api_sync(self, filepath, *args, **kwargs):
148
+ """Make the actual sync API call for a file."""
149
+ return self._partition_document(filepath)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
150
 
151
 
152
  if __name__ == "__main__":
153
+ parser = create_argument_parser("Unstructured document layout inference")
154
+ args = parse_args_with_extra(parser)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
155
 
156
  unstructured_inference = UnstructuredInference(
157
  args.save_path,
158
+ input_formats=args.input_formats,
159
+ concurrent_limit=args.concurrent,
160
+ sampling_rate=args.sampling_rate,
161
+ request_timeout=args.request_timeout,
162
+ random_seed=args.random_seed,
163
+ group_by_document=args.group_by_document,
164
+ file_ext_mapping=args.file_ext_mapping
165
  )
166
  unstructured_inference.infer(args.data_path)
 
scripts/infer_upstage.py CHANGED
@@ -1,28 +1,65 @@
 
 
 
 
 
1
  import os
2
- import sys
3
- import json
4
  import requests
5
- import argparse
6
 
7
- from pathlib import Path
8
- from utils import read_file_paths, validate_json_save_path, load_json_file
9
 
10
 
11
- class UpstageInference:
 
 
12
  def __init__(
13
  self,
14
  save_path,
15
- input_formats=[".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"],
16
- output_formats=["text", "html", "markdown"],
17
- model_name="document-parse-240910",
 
 
 
 
 
 
 
 
 
 
 
18
  ):
19
- """Initialize the UpstageInference class
 
20
  Args:
21
  save_path (str): the json path to save the results
22
  input_formats (list, optional): the supported input file formats.
23
  output_formats (list, optional): the supported output formats.
24
- model_name (str, optional): the model name. Defaults to "document-parse-240910".
 
 
 
 
 
 
 
 
 
 
 
25
  """
 
 
 
 
 
 
 
 
 
 
26
 
27
  self.endpoint = os.getenv("UPSTAGE_ENDPOINT", "")
28
  self.api_key = os.getenv("UPSTAGE_API_KEY", "")
@@ -30,108 +67,151 @@ class UpstageInference:
30
  if not all([self.endpoint, self.api_key]):
31
  raise ValueError("Please set the environment variables for Upstage")
32
 
33
- validate_json_save_path(save_path)
34
- self.save_path = save_path
35
- self.processed_data = load_json_file(save_path)
36
-
37
- self.input_formats = input_formats
38
  self.output_formats = output_formats
39
 
40
  self.headers = {
41
  "Authorization": f"Bearer {self.api_key}",
 
42
  }
43
 
44
  self.data = {
45
- "ocr": "force",
46
- "model": model_name,
47
- "output_formats": f"{self.output_formats}"
48
  }
49
-
50
- def infer(self, file_path) -> None:
51
- """Infer the layout of the documents in the given file path
 
 
 
 
 
 
 
 
 
 
 
 
 
 
52
  Args:
53
- file_path (str): the path to the file or directory containing the documents to process
 
 
 
 
54
  """
55
-
56
- paths = read_file_paths(file_path, self.input_formats)
57
-
58
- error_files = []
59
-
60
- result_dict = {}
61
- for idx, filepath in enumerate(paths):
62
- print("({}/{}) {}".format(idx+1, len(paths), filepath))
63
-
64
- filename = Path(filepath).name
65
- if filename in self.processed_data.keys():
66
- print(f"'{filename}' is already in the loaded dictionary. Skipping this sample")
67
- continue
68
-
69
- files = {
70
- "document": open(filepath, "rb"),
71
- }
72
-
73
- try:
74
- # The API does not support files exceeding 50MB
75
- # or containing more than 100 pages.
76
- response = requests.post(
77
- self.endpoint,
78
- headers=self.headers,
79
- files=files,
80
- data=self.data
81
- )
82
- json_result = response.json()
83
-
84
- result_dict[filename] = json_result
85
-
86
- except Exception as e:
87
- print(e)
88
- print("Error processing document..")
89
- error_files.append(filepath)
90
- continue
91
-
92
- for key in self.processed_data:
93
- result_dict[key] = self.processed_data[key]
94
-
95
- with open(self.save_path, "w", encoding="utf-8") as f:
96
- json.dump(result_dict, f, ensure_ascii=False, indent=4)
97
-
98
- for error_file in error_files:
99
- print(f"Error processing file: {error_file}")
100
-
101
- print("Finished processing all documents")
102
- print("Results saved to: {}".format(self.save_path))
103
- print("Number of errors: {}".format(len(error_files)))
 
 
 
 
 
104
 
105
 
106
  if __name__ == "__main__":
107
- args = argparse.ArgumentParser()
108
- args.add_argument(
109
- "--data_path",
110
- type=str, default="", required=True,
111
- help="Path containing the documents to process"
 
112
  )
113
- args.add_argument(
114
- "--save_path",
115
- type=str, default="", required=True,
116
- help="Path to save the results"
117
  )
118
- args.add_argument(
119
- "--input_formats",
120
- type=list, default=[
121
- ".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heic"
122
- ],
123
- help="Supported input file formats"
124
  )
125
- args.add_argument(
126
- "--output_formats",
127
- type=list, default=["text", "html", "markdown"],
128
- help="Output formats supported by the API"
 
 
 
 
 
129
  )
130
- args = args.parse_args()
 
 
 
 
 
 
131
 
132
  upstage_inference = UpstageInference(
133
  args.save_path,
134
  input_formats=args.input_formats,
135
- output_formats=args.output_formats
 
 
 
 
 
 
 
 
 
 
 
 
136
  )
137
  upstage_inference.infer(args.data_path)
 
1
+ """
2
+ Upstage document layout inference.
3
+
4
+ Uses Upstage's Document Parse API.
5
+ """
6
  import os
7
+
 
8
  import requests
 
9
 
10
+ from base import HttpClientInference, create_argument_parser, parse_args_with_extra
 
11
 
12
 
13
+ class UpstageInference(HttpClientInference):
14
+ """Upstage document layout inference."""
15
+
16
  def __init__(
17
  self,
18
  save_path,
19
+ input_formats=None,
20
+ output_formats=None,
21
+ model=None,
22
+ mode=None,
23
+ concurrent_limit=None,
24
+ sampling_rate=1.0,
25
+ request_timeout=600,
26
+ random_seed=None,
27
+ group_by_document=False,
28
+ file_ext_mapping=None,
29
+ ocr=None,
30
+ dpi=None,
31
+ jpeg_quality=None,
32
+ schema=None,
33
  ):
34
+ """Initialize the UpstageInference class.
35
+
36
  Args:
37
  save_path (str): the json path to save the results
38
  input_formats (list, optional): the supported input file formats.
39
  output_formats (list, optional): the supported output formats.
40
+ model (str, optional): the model name. Defaults to "document-parse-nightly".
41
+ mode (str, optional): the parsing mode. Either "standard", "enhanced", or "auto".
42
+ concurrent_limit (int, optional): maximum number of concurrent API requests
43
+ sampling_rate (float, optional): fraction of files to process (0.0-1.0)
44
+ request_timeout (float, optional): timeout in seconds for API requests
45
+ random_seed (int, optional): random seed for reproducible sampling
46
+ group_by_document (bool, optional): group per-page results into document-level
47
+ file_ext_mapping (str or dict, optional): file extension mapping for grouping
48
+ ocr (str, optional): OCR option (e.g. "auto", "force")
49
+ dpi (int, optional): DPI value for the request
50
+ jpeg_quality (int, optional): JPEG quality value for the request
51
+ schema (str, optional): output schema (e.g. "ufso")
52
  """
53
+ super().__init__(
54
+ save_path,
55
+ input_formats,
56
+ concurrent_limit,
57
+ sampling_rate,
58
+ request_timeout,
59
+ random_seed,
60
+ group_by_document,
61
+ file_ext_mapping
62
+ )
63
 
64
  self.endpoint = os.getenv("UPSTAGE_ENDPOINT", "")
65
  self.api_key = os.getenv("UPSTAGE_API_KEY", "")
 
67
  if not all([self.endpoint, self.api_key]):
68
  raise ValueError("Please set the environment variables for Upstage")
69
 
70
+ if output_formats is None:
71
+ output_formats = ["text", "html", "markdown"]
 
 
 
72
  self.output_formats = output_formats
73
 
74
  self.headers = {
75
  "Authorization": f"Bearer {self.api_key}",
76
+ "X-Upstage-Use-Cache": "false",
77
  }
78
 
79
  self.data = {
80
+ "ocr": ocr or "force",
81
+ "output_formats": f"{self.output_formats}",
82
+ "mode": mode or "standard"
83
  }
84
+ # Only include model if explicitly provided (hosted API requires it, local doesn't)
85
+ if model:
86
+ self.data["model"] = model
87
+ if dpi is not None:
88
+ self.data["dpi"] = str(dpi)
89
+ if jpeg_quality is not None:
90
+ self.data["jpeg_quality"] = str(jpeg_quality)
91
+ if schema:
92
+ self.data["schema"] = schema
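
Illustrative only: the form fields the constructor above would assemble for a call such as `UpstageInference(save_path, ocr="auto", dpi=150)` with no explicit model:

```
data = {
    "ocr": "auto",                                     # falls back to "force" when not given
    "output_formats": "['text', 'html', 'markdown']",  # the list is stringified, as above
    "mode": "standard",                                 # default when mode is None
    "dpi": "150",                                       # numeric options are sent as strings
}
```
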
93
+
94
+ def post_process(self, data):
95
+ """Post-process is not needed for Upstage - it returns data directly."""
96
+ return self._merge_processed_data(data)
97
+
98
+ async def _call_api_async(self, filepath, client=None):
99
+ """Make the actual async API call for a file.
100
+
101
  Args:
102
+ filepath: Path object to the file
103
+ client: httpx.AsyncClient instance
104
+
105
+ Returns:
106
+ dict: Raw API response data
107
  """
108
+ filename = filepath.name
109
+
110
+ with open(filepath, "rb") as f:
111
+ file_data = f.read()
112
+
113
+ files = {
114
+ "document": (filename, file_data, "application/pdf")
115
+ }
116
+
117
+ response = await client.post(
118
+ self.endpoint,
119
+ headers=self.headers,
120
+ files=files,
121
+ data=self.data,
122
+ timeout=300.0
123
+ )
124
+
125
+ if response.status_code != 200:
126
+ error_result = response.json() if response.headers.get("content-type", "").startswith("application/json") else {"error": response.text}
127
+ raise Exception(f"HTTP {response.status_code}: {error_result}")
128
+
129
+ return response.json()
130
+
131
+ def _call_api_sync(self, filepath):
132
+ """Make the actual sync API call for a file.
133
+
134
+ Args:
135
+ filepath: Path object to the file
136
+
137
+ Returns:
138
+ dict: Raw API response data
139
+ """
140
+ filename = filepath.name
141
+
142
+ files = {
143
+ "document": open(filepath, "rb"),
144
+ }
145
+
146
+ try:
147
+ response = requests.post(
148
+ self.endpoint,
149
+ headers=self.headers,
150
+ files=files,
151
+ data=self.data,
152
+ timeout=300.0
153
+ )
154
+
155
+ if response.status_code != 200:
156
+ error_result = response.json() if response.headers.get("content-type", "").startswith("application/json") else {"error": response.text}
157
+ raise Exception(f"HTTP {response.status_code}: {error_result}")
158
+
159
+ return response.json()
160
+ finally:
161
+ files["document"].close()
162
 
163
 
164
  if __name__ == "__main__":
165
+ parser = create_argument_parser("Upstage document layout inference")
166
+ parser.add_argument(
167
+ "--output-formats",
168
+ type=str, nargs='+',
169
+ default=["text", "html", "markdown"],
170
+ help="Output formats supported by the API"
171
  )
172
+ parser.add_argument(
173
+ "--ocr",
174
+ type=str, default=None,
175
+ help="OCR option (e.g. 'auto', 'force'). Default: force"
176
  )
177
+ parser.add_argument(
178
+ "--dpi",
179
+ type=int, default=None,
180
+ help="DPI value for the request"
 
 
181
  )
182
+ parser.add_argument(
183
+ "--jpeg-quality",
184
+ type=int, default=None,
185
+ help="JPEG quality value for the request"
186
+ )
187
+ parser.add_argument(
188
+ "--schema",
189
+ type=str, default=None,
190
+ help="Output schema (e.g. 'ufso')"
191
  )
192
+ # Override --mode with Upstage-specific choices
193
+ for action in parser._actions:
194
+ if action.dest == 'mode':
195
+ action.choices = ["standard", "enhanced", "auto"]
196
+ action.help = "Parsing mode: 'standard' (default), 'enhanced', or 'auto'"
197
+ break
198
+ args = parse_args_with_extra(parser)
199
 
200
  upstage_inference = UpstageInference(
201
  args.save_path,
202
  input_formats=args.input_formats,
203
+ output_formats=args.output_formats,
204
+ model=args.model,
205
+ mode=args.mode,
206
+ concurrent_limit=args.concurrent,
207
+ sampling_rate=args.sampling_rate,
208
+ request_timeout=args.request_timeout,
209
+ random_seed=args.random_seed,
210
+ group_by_document=args.group_by_document,
211
+ file_ext_mapping=args.file_ext_mapping,
212
+ ocr=args.ocr,
213
+ dpi=args.dpi,
214
+ jpeg_quality=args.jpeg_quality,
215
+ schema=args.schema,
216
  )
217
  upstage_inference.infer(args.data_path)
scripts/utils.py CHANGED
@@ -33,6 +33,9 @@ def read_file_paths(path: str, supported_formats: List[str] = [".jpg"]) -> List[
33
  else:
34
  file_paths = []
35
 
 
 
 
36
  return file_paths
37
 
38
 
@@ -62,3 +65,49 @@ def load_json_file(path: str) -> dict:
62
  else:
63
  # If the file does not exist, return an empty dictionary
64
  return {}
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
33
  else:
34
  file_paths = []
35
 
36
+ # sort by filename for deterministic ordering
37
+ file_paths.sort(key=lambda f: f.name)
38
+
39
  return file_paths
40
 
41
 
 
65
  else:
66
  # If the file does not exist, return an empty dictionary
67
  return {}
68
+
69
+
70
+ def get_interim_dir_path(save_path: str) -> Path:
71
+ """Get the interim directory path based on the output filename"""
72
+ save_path_obj = Path(save_path)
73
+ interim_dir_name = save_path_obj.stem # Get filename without extension
74
+ return save_path_obj.parent / interim_dir_name
75
+
76
+
77
+ def get_interim_file_path(interim_dir: Path, filename: str) -> Path:
78
+ """Get the interim file path for a specific document"""
79
+ safe_filename = filename.replace('/', '_').replace('\\', '_') # Handle path separators
80
+ return interim_dir / f"{safe_filename}.json"
81
+
82
+
83
+ def save_interim_result(interim_dir: Path, filename: str, result: dict) -> None:
84
+ """Save individual API result to interim JSON file"""
85
+ interim_path = get_interim_file_path(interim_dir, filename)
86
+ with open(interim_path, "w", encoding="utf-8") as f:
87
+ json.dump(result, f, ensure_ascii=False, indent=4)
88
+
89
+
90
+ def load_interim_result(interim_dir: Path, filename: str) -> dict:
91
+ """Load individual API result from interim JSON file"""
92
+ interim_path = get_interim_file_path(interim_dir, filename)
93
+ if interim_path.exists():
94
+ with open(interim_path, "r", encoding="utf-8") as f:
95
+ return json.load(f)
96
+ return None
97
+
98
+
99
+ def collect_all_interim_results(interim_dir: Path) -> dict:
100
+ """Collect all interim results from the interim directory"""
101
+ result_dict = {}
102
+
103
+ # Load all interim results
104
+ if interim_dir.exists():
105
+ for interim_file in interim_dir.glob("*.json"):
106
+ filename = interim_file.stem # Remove .json extension
107
+ try:
108
+ with open(interim_file, "r", encoding="utf-8") as f:
109
+ result_dict[filename] = json.load(f)
110
+ except Exception as e:
111
+ print(f"Error loading interim file {interim_file}: {e}")
112
+
113
+ return result_dict
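
A hedged round-trip sketch of the new interim-result helpers, run from the `scripts/` directory; the paths and file names are placeholders:

```
from utils import (
    collect_all_interim_results,
    get_interim_dir_path,
    load_interim_result,
    save_interim_result,
)

interim_dir = get_interim_dir_path("predictions/google.json")  # -> predictions/google
interim_dir.mkdir(parents=True, exist_ok=True)

save_interim_result(interim_dir, "page_001.jpg", {"elements": []})
assert load_interim_result(interim_dir, "page_001.jpg") == {"elements": []}

# Keys are the interim file stems, i.e. the original document file names.
print(sorted(collect_all_interim_results(interim_dir)))  # ['page_001.jpg']
```
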
src/geometry.py ADDED
@@ -0,0 +1,9 @@
 
 
 
 
 
 
 
 
 
 
1
+ from shapely.geometry import Polygon
2
+ from shapely import affinity
3
+
4
+
5
+ def is_included(points_0, points_1, soft=0.2):
6
+ polygon_0 = Polygon(points_0)
7
+ polygon_1 = Polygon(points_1)
8
+ polygon_1 = affinity.scale(polygon_1, xfact=1. - soft, yfact=1. - soft)
9
+ return polygon_0.contains(polygon_1)
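
A quick check of the soft-containment test above, run from the repository root with shapely installed. The inner polygon is shrunk by `soft` around its center before the containment check, so a box that slightly overflows the outer region can still count as included:

```
from src.geometry import is_included

outer = [(0, 0), (100, 0), (100, 100), (0, 100)]
inner = [(5, 5), (105, 5), (105, 95), (5, 95)]  # pokes 5px past the right edge

# With the default shrink the inner box still counts as included ...
print(is_included(outer, inner, soft=0.2))  # True
# ... but without it the overflow is enough to fail the containment test.
print(is_included(outer, inner, soft=0.0))  # False
```
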
src/layout_evaluation.py CHANGED
@@ -1,4 +1,16 @@
1
  from rapidfuzz import fuzz
 
 
 
 
 
 
 
 
 
 
 
 
2
 
3
  def calc_nid(
4
  gt_text : list,
@@ -27,68 +39,111 @@ def calc_nid(
27
 
28
 
29
  def extract_text(
30
- data : dict,
 
31
  ignore_classes : list = [],
32
  strings_to_remove : list = ["\n"],
33
- ) -> str:
34
- """Extract text from the dictionary data.
 
 
35
 
36
  Args:
37
- data (dict): The data to extract text from.
 
38
  ignore_classes (list): A list of classes to ignore during extraction.
39
  strings_to_remove (list): A list of strings to remove from the extracted text.
 
 
40
 
41
  Returns:
42
- str: The concatenated text extracted from the data.
43
  """
44
 
45
  ignore_classes = [x.lower() for x in ignore_classes]
46
 
47
- concatenated_text = ""
48
-
49
- for elem in data["elements"]:
 
 
 
 
 
 
 
 
50
  if elem["category"].lower() in ignore_classes:
51
  continue
52
-
53
- concatenated_text += elem["content"]["text"] + ' '
54
-
55
- # remove unwanted strings
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
56
  for string in strings_to_remove:
57
- concatenated_text = concatenated_text.replace(string, '')
 
58
 
59
- return concatenated_text
60
 
61
 
62
  def evaluate_layout(
63
  gt : dict,
64
  pred : dict,
65
  ignore_classes : list = [],
66
- ) -> float:
 
67
  """Evaluate the layout of the gt against the pred.
68
 
69
  Args:
70
  gt (dict): The gt layout to evaluate.
71
  pred (dict): The pred layout to evaluate against.
72
  ignore_classes (list): A list of classes to ignore during evaluation.
 
 
73
 
74
  Returns:
75
- float: The layout evaluation score.
76
  """
77
  scores = []
 
 
78
  for image_key in gt.keys():
79
  gt_data = gt.get(image_key)
80
  pred_data = pred.get(image_key)
81
 
82
- gt_text = extract_text(gt_data, ignore_classes)
83
- pred_text = extract_text(pred_data, ignore_classes)
84
 
85
  score = calc_nid(gt_text, pred_text)
86
 
87
  scores.append(score)
 
 
 
88
 
89
  if len(scores) > 0:
90
  avg_score = sum(scores) / (len(scores) * 100)
91
  else:
92
  avg_score = 0
93
 
94
- return avg_score
 
1
  from rapidfuzz import fuzz
2
+ from src.geometry import is_included
3
+
4
+
5
+ def _normalize_coords(coordinates):
6
+ """Convert coordinates to list of [x, y] pairs.
7
+
8
+ Accepts either [{"x": x, "y": y}, ...] or [[x, y], ...].
9
+ """
10
+ return [
11
+ [coord["x"], coord["y"]] if isinstance(coord, dict) else list(coord)
12
+ for coord in coordinates
13
+ ]
14
 
15
  def calc_nid(
16
  gt_text : list,
 
39
 
40
 
41
  def extract_text(
42
+ gt_data : dict,
43
+ pred_data : dict,
44
  ignore_classes : list = [],
45
  strings_to_remove : list = ["\n"],
46
+ filter_by_gt_area : bool = True,
47
+ ) -> tuple:
48
+ """Extract text from both GT and prediction data, optionally filtering out
49
+ predictions that fall within GT ignored regions.
50
 
51
  Args:
52
+ gt_data (dict): The GT data to extract text from.
53
+ pred_data (dict): The prediction data to extract text from.
54
  ignore_classes (list): A list of classes to ignore during extraction.
55
  strings_to_remove (list): A list of strings to remove from the extracted text.
56
+ filter_by_gt_area (bool): If True, filter out prediction text within GT ignored regions.
57
+ If False, only filter by category. Defaults to True.
58
 
59
  Returns:
60
+ tuple: (gt_text, pred_text) - The concatenated text extracted from GT and predictions.
61
  """
62
 
63
  ignore_classes = [x.lower() for x in ignore_classes]
64
 
65
+ # Collect GT ignored regions' coordinates (only if spatial filtering is enabled)
66
+ gt_ignored_regions = []
67
+ if filter_by_gt_area:
68
+ for elem in gt_data["elements"]:
69
+ if elem["category"].lower() in ignore_classes:
70
+ coords = _normalize_coords(elem["coordinates"])
71
+ gt_ignored_regions.append(coords)
72
+
73
+ # Extract GT text (excluding ignored classes)
74
+ gt_text = ""
75
+ for elem in gt_data["elements"]:
76
  if elem["category"].lower() in ignore_classes:
77
  continue
78
+ gt_text += elem["content"]["text"] + ' '
79
+
80
+ # Extract prediction text (excluding ignored classes AND optionally elements within GT ignored regions)
81
+ pred_text = ""
82
+ if pred_data is not None:
83
+ for elem in pred_data["elements"]:
84
+ if elem["category"].lower() in ignore_classes:
85
+ continue
86
+
87
+ # Check if this prediction element is included in any GT ignored region (only if enabled)
88
+ if filter_by_gt_area:
89
+ elem_coords = _normalize_coords(elem["coordinates"])
90
+ is_in_ignored_region = False
91
+
92
+ for ignored_region in gt_ignored_regions:
93
+ if is_included(ignored_region, elem_coords, soft=0.2):
94
+ is_in_ignored_region = True
95
+ break
96
+
97
+ if is_in_ignored_region:
98
+ continue
99
+
100
+ pred_text += elem["content"]["text"] + ' '
101
+
102
+ # Remove unwanted strings from both texts
103
  for string in strings_to_remove:
104
+ gt_text = gt_text.replace(string, '')
105
+ pred_text = pred_text.replace(string, '')
106
 
107
+ return gt_text, pred_text
108
 
109
 
110
  def evaluate_layout(
111
  gt : dict,
112
  pred : dict,
113
  ignore_classes : list = [],
114
+ filter_by_gt_area : bool = True,
115
+ ) -> tuple:
116
  """Evaluate the layout of the gt against the pred.
117
 
118
  Args:
119
  gt (dict): The gt layout to evaluate.
120
  pred (dict): The pred layout to evaluate against.
121
  ignore_classes (list): A list of classes to ignore during evaluation.
122
+ filter_by_gt_area (bool): If True, filter out prediction text within GT ignored regions.
123
+ If False, only filter by category. Defaults to True.
124
 
125
  Returns:
126
+ tuple: (avg_score, per_image_scores) - The average layout evaluation score and per-image scores dict.
127
  """
128
  scores = []
129
+ per_image_scores = {}
130
+
131
  for image_key in gt.keys():
132
  gt_data = gt.get(image_key)
133
  pred_data = pred.get(image_key)
134
 
135
+ gt_text, pred_text = extract_text(gt_data, pred_data, ignore_classes, filter_by_gt_area=filter_by_gt_area)
 
136
 
137
  score = calc_nid(gt_text, pred_text)
138
 
139
  scores.append(score)
140
+ per_image_scores[image_key] = {
141
+ "nid_score": score / 100.0
142
+ }
143
 
144
  if len(scores) > 0:
145
  avg_score = sum(scores) / (len(scores) * 100)
146
  else:
147
  avg_score = 0
148
 
149
+ return avg_score, per_image_scores
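A minimal usage sketch of the updated function, with a hypothetical single-page input (the real evaluation reads the reference and prediction JSON files):

```
gt = {"doc_001.png": {"elements": [
    {"category": "paragraph",
     "coordinates": [[0, 0], [100, 0], [100, 50], [0, 50]],
     "content": {"text": "Hello world"}}]}}
pred = {"doc_001.png": {"elements": [
    {"category": "paragraph",
     "coordinates": [[0, 0], [100, 0], [100, 50], [0, 50]],
     "content": {"text": "Hello world"}}]}}

avg_nid, per_image = evaluate_layout(gt, pred, ignore_classes=["footnote"])
# For an exact text match the NID is maximal, so per_image["doc_001.png"]["nid_score"]
# should be 1.0 (calc_nid appears to return percentages, hence the /100 normalization).
```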
src/table_evaluation.py CHANGED
@@ -7,6 +7,12 @@ A slight modification has been added to the code to improve the evaluation proce
7
 
8
  import re
9
  import distance
10
 
11
  from lxml import etree, html
12
  from collections import deque
@@ -34,6 +40,55 @@ class TableTree(Tree):
34
  result += child.bracket()
35
  return "{{{}}}".format(result)
36
 
38
  class CustomConfig(Config):
39
  """Custom Configuration for APTED"""
@@ -62,7 +117,12 @@ class CustomConfig(Config):
62
 
63
  class TEDSEvaluator(object):
64
  """Tree Edit Distance basead Similarity"""
65
- def __init__(self, structure_only=False, n_jobs=1, ignore_nodes=None):
66
  assert isinstance(n_jobs, int) and (n_jobs >= 1), (
67
  'n_jobs must be an integer greater than or equal to 1'
68
  )
@@ -84,26 +144,59 @@ class TEDSEvaluator(object):
84
  self.__tokens__ += list(node.tail)
85
 
86
  def load_html_tree(self, node, parent=None):
87
- """Converts HTML tree to the format required by apted"""
88
  global __tokens__
89
  if node.tag == 'td':
90
- if self.structure_only:
91
- cell = []
92
  else:
93
- self.__tokens__ = []
94
- self.tokenize(node)
95
- cell = self.__tokens__[1:-1].copy()
96
- new_node = TableTree(
97
- node.tag,
98
- int(node.attrib.get('colspan', '1')),
99
- int(node.attrib.get('rowspan', '1')),
100
- cell, *deque()
101
- )
102
  else:
103
  new_node = TableTree(node.tag, None, None, None, *deque())
104
- if parent is not None:
105
- parent.children.append(new_node)
106
- if node.tag != 'td':
107
  for n in node.getchildren():
108
  self.load_html_tree(n, new_node)
109
  if parent is None:
@@ -113,13 +206,22 @@ class TEDSEvaluator(object):
113
  """Computes TEDS score between the prediction and the ground truth of a given sample"""
114
  if (not pred) or (not true):
115
  return 0.0
 
116
  parser = html.HTMLParser(remove_comments=True, encoding='utf-8')
117
  pred = html.fromstring(pred, parser=parser)
118
  true = html.fromstring(true, parser=parser)
119
-
120
  if pred.xpath('body/table') and true.xpath('body/table'):
121
- pred = pred.xpath('body/table')[0]
122
- true = true.xpath('body/table')[0]
123
  if self.ignore_nodes:
124
  etree.strip_tags(pred, *self.ignore_nodes)
125
  etree.strip_tags(true, *self.ignore_nodes)
@@ -168,6 +270,228 @@ def extract_tables(data : dict) -> str:
168
  return html
169
 
170
 
171
  def has_table_content(html_data : str) -> bool:
172
  """Check if the table has content between <html><body> and </body></html>.
173
 
@@ -212,41 +536,11 @@ def prepare_table_dataset(gt_data, pred_data):
212
  return gt_table_list, pred_table_list
213
 
214
 
215
- def calc_table_score(gt_string, pred_string, evaluator):
216
- """Calculate the table evaluation score between the gold and pred strings.
217
-
218
- Args:
219
- gt_string (str): The ground truth html string to compare.
220
- pred_string (str): The predicted html string to compare.
221
- evaluator (TEDS/TEDS-S): The TEDS/TEDS-S evaluator to use.
222
- Returns:
223
- float: The table evaluation score.
224
- """
225
- refined_pred = pred_string
226
- refined_gold = gt_string
227
- if pred_string.startswith('<table>') and pred_string.endswith('</table>'):
228
- refined_pred = '<html><body>' + pred_string + '</body></html>'
229
- elif not pred_string.startswith('<html><body><table>') and not pred_string.endswith('</table></body></html>'):
230
- refined_pred = '<html><body><table>' + refined_pred + '</table></body></html>'
231
-
232
- if gt_string.startswith('<table>') and gt_string.endswith('</table>'):
233
- refined_gold = '<html><body>' + gt_string + '</body></html>'
234
- elif not gt_string.startswith('<html><body><table>') and not gt_string.endswith('</table></body></html>'):
235
- refined_gold = '<html><body><table>' + refined_gold + '</table></body></html>'
236
-
237
- # remove thead and tbody
238
- for tok in ['<thead>', '</thead>', '<tbody>', '</tbody>']:
239
- refined_pred = refined_pred.replace(tok, '')
240
- refined_gold = refined_gold.replace(tok, '')
241
-
242
- score = evaluator.evaluate(refined_pred, refined_gold)
243
-
244
- return score
245
-
246
-
247
  def evaluate_table(
248
  gt : dict,
249
- pred : dict
250
  ) -> tuple:
251
  """Evaluate the table of the gt against the pred.
252
 
@@ -255,35 +549,83 @@ def evaluate_table(
255
  pred (dict): The pred layout to evaluate against.
256
 
257
  Returns:
258
- tuple(float, float): The TEDS and TEDS-S scores for the table evaluation.
259
  """
260
-
261
- gt_table_list, pred_table_list = prepare_table_dataset(gt, pred)
262
-
263
  avg_teds_score = 0.0
264
  avg_teds_s_score = 0.0
265
 
266
- if len(gt_table_list) == 0:
267
- print('[Warning] No tables found in the ground truth dataset.')
268
- elif len(pred_table_list) == 0:
269
- print('[Warning] No tables found in the prediction dataset.')
270
- else:
271
- # Construct Table Evaluator for TEDS
272
- # TEDS only evaluates the structure of the table
273
- table_evaluator = TEDSEvaluator(structure_only=True)
274
- teds_s_scores = []
275
- for gt_table_elem, pred_table_elem in zip(gt_table_list, pred_table_list):
276
- teds_s_score = calc_table_score(gt_table_elem, pred_table_elem, table_evaluator)
277
- teds_s_scores.append(teds_s_score)
278
- avg_teds_s_score= sum(teds_s_scores) / len(teds_s_scores)
279
-
280
- # Construct Table Evaluator for TEDS-S
281
- # TEDS-S evaluates the structure and content of the table
282
- table_evaluator = TEDSEvaluator(structure_only=False)
283
- teds_scores = []
284
- for gt_table_elem, pred_table_elem in zip(gt_table_list, pred_table_list):
285
- teds_score = calc_table_score(gt_table_elem, pred_table_elem, table_evaluator)
286
- teds_scores.append(teds_score)
287
- avg_teds_score = sum(teds_scores) / len(teds_scores)
288
 
289
- return avg_teds_score, avg_teds_s_score
7
 
8
  import re
9
  import distance
10
+ import numpy as np
11
+ from concurrent.futures import ThreadPoolExecutor
12
+ try:
13
+ from scipy.optimize import linear_sum_assignment
14
+ except Exception as _e:
15
+ linear_sum_assignment = None
16
 
17
  from lxml import etree, html
18
  from collections import deque
 
40
  result += child.bracket()
41
  return "{{{}}}".format(result)
42
 
43
+ def visualize(self, indent=0, prefix="", is_last=True):
44
+ """Visualize tree structure in ASCII art format
45
+
46
+ Args:
47
+ indent (int): Current indentation level
48
+ prefix (str): Prefix for tree branches
49
+ is_last (bool): Whether this is the last child of its parent
50
+
51
+ Returns:
52
+ str: ASCII tree visualization
53
+ """
54
+ # Prepare node information
55
+ if self.tag == 'td':
56
+ content_preview = ''
57
+ if self.content:
58
+ content_str = ''.join(self.content) if isinstance(self.content, list) else str(self.content)
59
+ content_preview = content_str[:30] + '...' if len(content_str) > 30 else content_str
60
+ content_preview = f' "{content_preview}"' if content_preview else ''
61
+
62
+ node_info = f"{self.tag}"
63
+ attrs = []
64
+ if self.colspan and self.colspan > 1:
65
+ attrs.append(f"colspan={self.colspan}")
66
+ if self.rowspan and self.rowspan > 1:
67
+ attrs.append(f"rowspan={self.rowspan}")
68
+ if attrs:
69
+ node_info += f" [{', '.join(attrs)}]"
70
+ if content_preview:
71
+ node_info += content_preview
72
+ else:
73
+ node_info = self.tag
74
+
75
+ # Build the tree line
76
+ if indent == 0:
77
+ result = f"{node_info}\n"
78
+ else:
79
+ connector = "└── " if is_last else "├── "
80
+ result = f"{prefix}{connector}{node_info}\n"
81
+
82
+ # Process children
83
+ for i, child in enumerate(self.children):
84
+ is_last_child = (i == len(self.children) - 1)
85
+ if indent == 0:
86
+ child_prefix = ""
87
+ else:
88
+ child_prefix = prefix + (" " if is_last else "│ ")
89
+ result += child.visualize(indent + 1, child_prefix, is_last_child)
90
+
91
+ return result
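A small sketch of how this helper could be used to inspect a parsed table, assuming `load_html_tree` still returns the root `TableTree` when called without a parent, as in the original TEDS code:

```
from lxml import html as lxml_html

evaluator = TEDSEvaluator(structure_only=False)
root = lxml_html.fromstring(
    '<html><body><table><tr><td colspan="2">header</td></tr>'
    '<tr><td>a</td><td>b</td></tr></table></body></html>'
)
tree = evaluator.load_html_tree(root.xpath('body/table')[0])
print(tree.visualize())   # ASCII tree with td nodes, colspan/rowspan and content previews
```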
92
 
93
  class CustomConfig(Config):
94
  """Custom Configuration for APTED"""
 
117
 
118
  class TEDSEvaluator(object):
119
  """Tree Edit Distance basead Similarity"""
120
+ def __init__(
121
+ self,
122
+ structure_only=False,
123
+ n_jobs=1,
124
+ ignore_nodes=None,
125
+ ):
126
  assert isinstance(n_jobs, int) and (n_jobs >= 1), (
127
  'n_jobs must be an integer greather than 1'
128
  )
 
144
  self.__tokens__ += list(node.tail)
145
 
146
  def load_html_tree(self, node, parent=None):
147
+ """Converts HTML tree to the format required by apted
148
+ This version treats nested tables as separate tree nodes rather than content.
149
+ """
150
  global __tokens__
151
  if node.tag == 'td':
152
+ # Check if td contains nested table(s)
153
+ nested_tables = [n for n in node.getchildren() if n.tag == 'table']
154
+
155
+ if nested_tables:
156
+ # td has nested table(s) - create td node and add tables as children
157
+ if self.structure_only:
158
+ cell = []
159
+ else:
160
+ self.__tokens__ = []
161
+ if node.text is not None:
162
+ self.__tokens__ += list(node.text)
163
+ for n in node.getchildren():
164
+ if n.tag != 'table':
165
+ self.tokenize(n)
166
+ if n.tail is not None:
167
+ self.__tokens__ += list(n.tail)
168
+ cell = self.__tokens__.copy() if self.__tokens__ else []
169
+
170
+ new_node = TableTree(
171
+ node.tag,
172
+ int(node.attrib.get('colspan', '1')),
173
+ int(node.attrib.get('rowspan', '1')),
174
+ cell, *deque()
175
+ )
176
+ # Add nested tables as children
177
+ if parent is not None:
178
+ parent.children.append(new_node)
179
+ for table in nested_tables:
180
+ self.load_html_tree(table, new_node)
181
  else:
182
+ if self.structure_only:
183
+ cell = []
184
+ else:
185
+ self.__tokens__ = []
186
+ self.tokenize(node)
187
+ cell = self.__tokens__[1:-1].copy()
188
+ new_node = TableTree(
189
+ node.tag,
190
+ int(node.attrib.get('colspan', '1')),
191
+ int(node.attrib.get('rowspan', '1')),
192
+ cell, *deque()
193
+ )
194
+ if parent is not None:
195
+ parent.children.append(new_node)
196
  else:
197
  new_node = TableTree(node.tag, None, None, None, *deque())
198
+ if parent is not None:
199
+ parent.children.append(new_node)
 
200
  for n in node.getchildren():
201
  self.load_html_tree(n, new_node)
202
  if parent is None:
 
206
  """Computes TEDS score between the prediction and the ground truth of a given sample"""
207
  if (not pred) or (not true):
208
  return 0.0
209
+
210
  parser = html.HTMLParser(remove_comments=True, encoding='utf-8')
211
  pred = html.fromstring(pred, parser=parser)
212
  true = html.fromstring(true, parser=parser)
 
213
  if pred.xpath('body/table') and true.xpath('body/table'):
214
+ pred_tables = pred.xpath('body/table')
215
+ true_tables = true.xpath('body/table')
216
+ # Keep the legacy single-table comparison only when both sides contain exactly one table.
217
+ # Otherwise, fall back to comparing the whole <body> so multi-table pages are still scored.
218
+ if len(pred_tables) == 1 and len(true_tables) == 1:
219
+ pred = pred_tables[0]
220
+ true = true_tables[0]
221
+ else:
222
+ # Fallback: wrap the entire body as a single root for structural comparison
223
+ pred = pred.xpath('body')[0]
224
+ true = true.xpath('body')[0]
225
  if self.ignore_nodes:
226
  etree.strip_tags(pred, *self.ignore_nodes)
227
  etree.strip_tags(true, *self.ignore_nodes)
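To illustrate the nested-table handling, a hypothetical HTML string is shown below; with the change above, the inner table becomes a child subtree of its `td` node instead of being flattened into the cell text:

```
gt_html = ('<html><body><table><tr><td>name</td>'
           '<td><table><tr><td>a</td><td>b</td></tr></table></td></tr></table></body></html>')

evaluator = TEDSEvaluator(structure_only=False)
print(evaluator.evaluate(gt_html, gt_html))   # identical trees -> 1.0
```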
 
270
  return html
271
 
272
 
273
+ def extract_table_list(data: dict):
274
+ """Return a list of individual <table>...</table> HTML snippets from a doc dict.
275
+
276
+ Each returned entry is a complete single-table HTML fragment (wrapped with <table> tags).
277
+ """
278
+ tables: list = []
279
+ elements = (data or {}).get('elements', []) or []
280
+ for elem in elements:
281
+ try:
282
+ category = (elem or {}).get('category', '')
283
+ if isinstance(category, str) and category.lower() == 'table':
284
+ html_contents = ((elem or {}).get('content', {}) or {}).get('html') or ''
285
+ if isinstance(html_contents, str):
286
+ tables.append(html_contents)
287
+ elif isinstance(html_contents, list):
288
+ for html_content in html_contents:
289
+ if isinstance(html_content, str):
290
+ tables.append(html_content)
291
+ except Exception:
292
+ continue
293
+ return tables
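For example, given a document entry in the same format as the reference data, this returns one HTML snippet per table element (hypothetical values):

```
doc = {"elements": [
    {"category": "table", "content": {"html": "<table><tr><td>1</td></tr></table>", "text": "1"}},
    {"category": "paragraph", "content": {"text": "not a table"}},
]}
extract_table_list(doc)   # -> ['<table><tr><td>1</td></tr></table>']
```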
294
+
295
+ def _simplify_single_table(table_elem):
296
+ """
297
+ Simplify a single table element. (Recursive handling of nested tables)
298
+
299
+ Args:
300
+ table_elem: lxml element - MUST be a <table> element
301
+
302
+ Returns:
303
+ lxml element: The simplified table element
304
+ """
305
+
306
+ # 1. Remove all attributes of the table element
307
+ table_elem.attrib.clear()
308
+
309
+ # 2. Remove thead, tbody, tfoot wrappers (keep their children)
310
+ # Only process wrappers that belong to this table, not nested tables
311
+ for wrapper_tag in ('thead', 'tbody', 'tfoot'):
312
+ # Find all wrappers but only process those directly under this table
313
+ for wrapper in list(table_elem.xpath(f'./{wrapper_tag}')):
314
+ parent = wrapper.getparent()
315
+ index = list(parent).index(wrapper)
316
+ for child in list(wrapper):
317
+ parent.insert(index, child)
318
+ index += 1
319
+ parent.remove(wrapper)
320
+
321
+ # 3. Get direct tr children, excluding those in nested tables
322
+ direct_rows = []
323
+ for tr in table_elem.xpath('.//tr'):
324
+ # Find the closest table ancestor
325
+ parent = tr.getparent()
326
+ while parent is not None and parent.tag != 'table':
327
+ parent = parent.getparent()
328
+ # Only include if the closest table ancestor is our table_elem
329
+ if parent is table_elem:
330
+ direct_rows.append(tr)
331
+
332
+ # 4. Remove attributes of the tr and cell except for colspan and rowspan
333
+ for tr in direct_rows:
334
+ tr.attrib.clear()
335
+ for cell in tr:
336
+ if cell.tag in ('th', 'td'): # Replace th with td
337
+ if cell.tag == 'th':
338
+ cell.tag = 'td'
339
+
340
+ # Keep only colspan and rowspan attributes
341
+ new_attrib = {}
342
+ if 'colspan' in cell.attrib:
343
+ new_attrib['colspan'] = cell.attrib['colspan']
344
+ if 'rowspan' in cell.attrib:
345
+ new_attrib['rowspan'] = cell.attrib['rowspan']
346
+ cell.attrib.clear()
347
+ cell.attrib.update(new_attrib)
348
+
349
+ # Check if there is a nested table
350
+ nested_tables = cell.xpath('.//table')
351
+
352
+ # Recursively handle nested tables
353
+ for nested in nested_tables:
354
+ _simplify_single_table(nested)
355
+
356
+ # Remove unnecessary tags (keep content, remove tag wrapper)
357
+ # These tags are stripped but their text content is preserved
358
+ unnecessary_tags = [
359
+ 'div', 'span', 'p', 'br', 'b', 'i', 'strong', 'em', 'u',
360
+ 'font', 'a', 'sup', 'sub', 'small', 'big', 'center',
361
+ 'label', 'section', 'article', 'header', 'footer', 'nav'
362
+ ]
363
+ etree.strip_tags(cell, *unnecessary_tags)
364
+
365
+ # Get text content (include text of all sub-tags)
366
+ text_content = cell.text_content()
367
+ if text_content:
368
+ text_content = text_content.strip().replace('\xa0', '').replace('&nbsp;', '').strip()
369
+
370
+ # If completely empty, set it as an empty cell (no text and no nested table)
371
+ if (not text_content or text_content == '') and not nested_tables:
372
+ for child in list(cell):
373
+ cell.remove(child)
374
+ cell.text = ''
375
+ cell.tail = None
376
+
377
+ return table_elem
378
+
379
+ def preprocess_table(table_html_list):
380
+ """
381
+ Preprocess the HTML table list to the basic structure.
382
+ Recursively handle nested tables (TINT).
383
+
384
+ Args:
385
+ table_html_list (list): List of HTML table strings
386
+
387
+ Returns:
388
+ list: Simplified list of HTML table strings
389
+ """
390
+ preprocessed_tables = []
391
+
392
+ for html_string in table_html_list:
393
+ try:
394
+ parser = html.HTMLParser(remove_comments=True, encoding='utf-8')
395
+
396
+ # Extract outermost <table>...</table> if exists, otherwise wrap with <table>
397
+ table_start = html_string.find('<table')
398
+ table_end = html_string.rfind('</table>')
399
+ if table_start != -1 and table_end != -1:
400
+ # Extract the outermost table
401
+ html_string = html_string[table_start:table_end + len('</table>')]
402
+ else:
403
+ # No table tag found, wrap content with <table>
404
+ html_string = f'<table>{html_string}</table>'
405
+
406
+ root = html.fromstring(html_string, parser=parser)
407
+
408
+ # root itself might be the table element
409
+ if root.tag == 'table':
410
+ table = root
411
+ else:
412
+ table = root.xpath('.//table')[0]
413
+
414
+ # Simplify the table
415
+ table = _simplify_single_table(table)
416
+
417
+ table_string = etree.tostring(table, encoding='unicode', method='html')
418
+ table_string = '<html><body>' + re.sub(r'>\s+<', '><', table_string).strip() + '</body></html>'
419
+ preprocessed_tables.append(table_string)
420
+
421
+ except Exception as e:
422
+ print(f"[WARNING] Failed to simplify table: {e}, {html_string}")
423
+ preprocessed_tables.append(html_string)
424
+
425
+ return preprocessed_tables
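A short sketch of the normalization with a hypothetical styled table: attributes and `thead`/`tbody` wrappers are stripped, inline tags are removed, `th` becomes `td`, and the result is re-wrapped in `<html><body>...</body></html>`:

```
raw = ('<table class="x"><thead><tr><th>A</th></tr></thead>'
       '<tbody><tr><td><b>1</b></td></tr></tbody></table>')
print(preprocess_table([raw])[0])
# Expected shape (roughly):
# <html><body><table><tr><td>A</td></tr><tr><td>1</td></tr></table></body></html>
```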
426
+
427
+ def _compute_single_pair_score(args):
428
+ """Helper function to compute score for a single (i, j) pair."""
429
+ i, j, gt_table, pred_table, evaluator = args
430
+ try:
431
+ s = float(evaluator.evaluate(pred_table, gt_table))
432
+ except Exception:
433
+ s = 0.0
434
+ return i, j, s
435
+
436
+ def _compute_teds_s_score(args):
437
+ """Helper function to compute TEDS-S score for a matched pair."""
438
+ gt_table, pred_table, evaluator = args
439
+ try:
440
+ return float(evaluator.evaluate(pred_table, gt_table))
441
+ except Exception:
442
+ return 0.0
443
+
444
+ def _hungarian_match_tables_by_score(
445
+ gt_tables: list,
446
+ pred_tables: list,
447
+ evaluator: TEDSEvaluator,
448
+ min_match_score: float = 0.1,
449
+ max_workers: int = 1,
450
+ ):
451
+ """Hungarian one-to-one matching of GT and Pred tables using the provided evaluator.
452
+
453
+ Returns list of tuples: (gt_idx, pred_idx, score)
454
+ """
455
+ matches: list = []
456
+ if not gt_tables or not pred_tables:
457
+ return matches
458
+ if linear_sum_assignment is None:
459
+ # Fallback: no scipy available, return empty
460
+ return matches
461
+
462
+ n = max(len(gt_tables), len(pred_tables))
463
+ cost = np.zeros((n, n), dtype=float)
464
+ score_mat = np.zeros((n, n), dtype=float)
465
+
466
+ # Initialize all costs to 1.0 (dummy pairs)
467
+ cost.fill(1.0)
468
+
469
+ # Build list of valid (i, j) pairs to compute
470
+ tasks = [
471
+ (i, j, gt_tables[i], pred_tables[j], evaluator)
472
+ for i in range(len(gt_tables))
473
+ for j in range(len(pred_tables))
474
+ ]
475
+
476
+ # Use ThreadPoolExecutor for parallel score computation within this process
477
+ if tasks:
478
+ with ThreadPoolExecutor(max_workers=max_workers) as executor:
479
+ results = list(executor.map(_compute_single_pair_score, tasks))
480
+
481
+ for i, j, s in results:
482
+ score_mat[i, j] = s
483
+ cost[i, j] = 1.0 - s
484
+
485
+ row_ind, col_ind = linear_sum_assignment(cost)
486
+
487
+ for i, j in zip(row_ind, col_ind):
488
+ if i < len(gt_tables) and j < len(pred_tables):
489
+ s = float(score_mat[i, j])
490
+ if s >= min_match_score:
491
+ matches.append((i, j, s))
492
+ return matches
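As a worked example of the matching step (hypothetical scores): the cost matrix is `1 - TEDS`, padded to a square with dummy entries, and `linear_sum_assignment` picks the one-to-one pairing that maximizes total TEDS.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

# 2 GT tables (rows) vs 3 predicted tables (columns), padded with a dummy GT row
scores = np.array([
    [0.95, 0.10, 0.20],
    [0.15, 0.88, 0.05],
    [0.00, 0.00, 0.00],
])
row_ind, col_ind = linear_sum_assignment(1.0 - scores)
# -> GT0 -> Pred0 (0.95) and GT1 -> Pred1 (0.88); Pred2 lands on the dummy row and stays unmatched.
```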
493
+
494
+
495
  def has_table_content(html_data : str) -> bool:
496
  """Check if the table has content between <html><body> and </body></html>.
497
 
 
536
  return gt_table_list, pred_table_list
537
 
538
 
539
  def evaluate_table(
540
  gt : dict,
541
+ pred : dict,
542
+ min_match_score: float = 0.0,
543
+ max_workers: int = 1,
544
  ) -> tuple:
545
  """Evaluate the table of the gt against the pred.
546
 
 
549
  pred (dict): The pred layout to evaluate against.
550
 
551
  Returns:
552
+ tuple(float, float, float, dict): The Table F1, TEDS, TEDS-S scores and per-image results.
553
  """
554
  avg_teds_score = 0.0
555
  avg_teds_s_score = 0.0
556
 
557
+ eval_s = TEDSEvaluator(structure_only=True)
558
+ eval_full = TEDSEvaluator(structure_only=False)
559
+
560
+ n_gt_tables = 0
561
+ n_pred_tables = 0
562
+ n_matched_tables = 0
563
+ teds_scores = []
564
+ teds_s_scores = []
565
+ per_image_scores = {}
566
+
567
+ for image_key in gt.keys():
568
+ gt_elem = gt.get(image_key)
569
+ pred_elem = pred.get(image_key)
570
+ gt_tables = extract_table_list(gt_elem)
571
+ pred_tables = extract_table_list(pred_elem)
572
+
573
+ n_gt_tables += len(gt_tables)
574
+ n_pred_tables += len(pred_tables)
575
+
576
+ # Initialize per-image result
577
+ per_image_scores[image_key] = {
578
+ "n_gt_tables": int(len(gt_tables)),
579
+ "n_pred_tables": int(len(pred_tables)),
580
+ "n_matched_tables": 0,
581
+ "matched_tables": []
582
+ }
583
+
584
+ if not gt_tables and not pred_tables:
585
+ continue
586
 
587
+ # Simplify tables before comparison
588
+ gt_tables = preprocess_table(gt_tables)
589
+ pred_tables = preprocess_table(pred_tables)
590
+ # TEDS (structure+content) for matching via Hungarian
591
+ matches = _hungarian_match_tables_by_score(
592
+ gt_tables, pred_tables, eval_full,
593
+ min_match_score=min_match_score,
594
+ max_workers=max_workers,
595
+ )
596
+
597
+ if matches:
598
+ n_matched_tables += len(matches)
599
+ per_image_scores[image_key]["n_matched_tables"] = int(len(matches))
600
+
601
+ # Extract TEDS scores from matches
602
+ teds_scores.extend([s for _, _, s in matches])
603
+
604
+ # Parallel computation of TEDS-S scores for matched pairs
605
+ teds_s_tasks = [(gt_tables[i], pred_tables[j], eval_s) for i, j, _ in matches]
606
+ with ThreadPoolExecutor(max_workers=max_workers) as executor:
607
+ teds_s_results = list(executor.map(_compute_teds_s_score, teds_s_tasks))
608
+
609
+ # Store results and per-image details
610
+ for (i, j, teds_score), teds_s_score in zip(matches, teds_s_results):
611
+ teds_s_scores.append(teds_s_score)
612
+
613
+ per_image_scores[image_key]["matched_tables"].append({
614
+ "gt_table_idx": int(i),
615
+ "pred_table_idx": int(j),
616
+ "teds_score": float(teds_score),
617
+ "teds_s_score": float(teds_s_score)
618
+ })
619
+
620
+ if len(teds_scores) > 0:
621
+ table_f1_score = 2 * n_matched_tables / (n_gt_tables + n_pred_tables)
622
+ avg_teds_score = sum(teds_scores) / len(teds_scores)
623
+ avg_teds_s_score = sum(teds_s_scores) / len(teds_s_scores)
624
+ else:
625
+ print('[Warning] No matched tables found between the ground truth and prediction datasets.')
626
+ table_f1_score = 0.0
627
+ avg_teds_score = 0.0
628
+ avg_teds_s_score = 0.0
629
+
630
+
631
+ return table_f1_score, avg_teds_score, avg_teds_s_score, per_image_scores
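The table F1 above is the usual detection F1 written directly in terms of table counts: matched tables are the true positives, so F1 = 2 * matched / (n_gt + n_pred). A quick worked example with hypothetical counts:

```
n_gt, n_pred, n_matched = 8, 10, 7
table_f1 = 2 * n_matched / (n_gt + n_pred)   # 14 / 18 ≈ 0.778
```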
src/utils.py CHANGED
@@ -1,6 +1,6 @@
1
  import json
2
  from pathlib import Path
3
- from typing import List
4
 
5
 
6
  def read_file(path: str, supported_formats: str = ".json") -> dict:
@@ -79,17 +79,27 @@ def read_file_paths(path: str, supported_formats: List[str] = [".jpg"]) -> List[
79
  return file_paths
80
 
81
 
82
- def check_dataset_format(data: dict, image_key: str) -> None:
83
  """Check the format of the dataset
84
 
85
  Args:
86
  data (dict): the gt/prediction dataset to check
87
  image_key (str): the image name acting as the key in the dataset
88
 
89
  Raises:
90
- ValueError: if a key is missing in the dataset
91
  """
 
 
 
 
94
  f"{image_key} does not have 'elements' key in the json file. "
95
  "Check if you are passing the correct data."
@@ -98,12 +108,16 @@ def check_dataset_format(data: dict, image_key: str) -> None:
98
  elements = data[image_key]["elements"]
99
  for elem in elements:
100
  if elem.get("category") is None:
101
  raise ValueError(
102
  f"{image_key} does not have 'category' key in the ground truth file. "
103
  "Check if you are passing the correct data."
104
  )
105
 
106
  if elem.get("content") is None:
107
  raise ValueError(
108
  f"{image_key} does not have 'content' key in the ground truth file. "
109
  "Check if you are passing the correct data."
@@ -111,19 +125,26 @@ def check_dataset_format(data: dict, image_key: str) -> None:
111
  else:
112
  content = elem["content"]
113
  if content.get("text") is None:
114
  raise ValueError(
115
  f"{image_key} does not have 'text' key in the ground truth file. "
116
  "Check if you are passing the correct data."
117
  )
118
 
119
 
120
- def check_data_validity(gt_data: dict, pred_data: dict) -> None:
121
  """Check the validity of the ground truth and prediction data
122
 
123
  Args:
124
  gt_data (dict): the ground truth data
125
  pred_data (dict): the prediction data
126
 
127
  Raises:
128
  ValueError: if the ground truth or prediction data is invalid
129
  """
@@ -134,16 +155,135 @@ def check_data_validity(gt_data: dict, pred_data: dict) -> None:
134
  if not pred_data:
135
  raise ValueError("Prediction data is empty")
136
 
 
137
  for image_key in gt_data.keys():
138
- pred_elem = pred_data.get(image_key)
139
- if pred_data is None:
140
- raise ValueError(
141
- f"{image_key} not found in prediction. "
142
- "Check if you are passing the correct data."
143
- )
144
 
  for image_key in gt_data.keys():
146
- check_dataset_format(gt_data, image_key)
 
 
148
- for image_key in pred_data.keys():
149
- check_dataset_format(pred_data, image_key)
 
1
  import json
2
  from pathlib import Path
3
+ from typing import List, Dict, Any
4
 
5
 
6
  def read_file(path: str, supported_formats: str = ".json") -> dict:
 
79
  return file_paths
80
 
81
 
82
+ def check_dataset_format(data: dict, image_key: str, is_prediction: bool = False) -> bool:
83
  """Check the format of the dataset
84
 
85
  Args:
86
  data (dict): the gt/prediction dataset to check
87
  image_key (str): the image name acting as the key in the dataset
88
+ is_prediction (bool): if True, skip entries with errors instead of raising
89
+
90
+ Returns:
91
+ bool: True if valid, False if entry has error (only when is_prediction=True)
92
 
93
  Raises:
94
+ ValueError: if a key is missing in the dataset (only when is_prediction=False)
95
  """
96
+ # For predictions, allow entries with "error" key - they will be skipped
97
+ if is_prediction and "error" in data[image_key]:
98
+ return False
99
+
100
  if data[image_key].get("elements") is None:
101
+ if is_prediction:
102
+ return False
103
  raise ValueError(
104
  f"{image_key} does not have 'elements' key in the json file. "
105
  "Check if you are passing the correct data."
 
108
  elements = data[image_key]["elements"]
109
  for elem in elements:
110
  if elem.get("category") is None:
111
+ if is_prediction:
112
+ return False
113
  raise ValueError(
114
  f"{image_key} does not have 'category' key in the ground truth file. "
115
  "Check if you are passing the correct data."
116
  )
117
 
118
  if elem.get("content") is None:
119
+ if is_prediction:
120
+ return False
121
  raise ValueError(
122
  f"{image_key} does not have 'content' key in the ground truth file. "
123
  "Check if you are passing the correct data."
 
125
  else:
126
  content = elem["content"]
127
  if content.get("text") is None:
128
+ if is_prediction:
129
+ return False
130
  raise ValueError(
131
  f"{image_key} does not have 'text' key in the ground truth file. "
132
  "Check if you are passing the correct data."
133
  )
134
+
135
+ return True
136
 
137
 
138
+ def check_data_validity(gt_data: dict, pred_data: dict) -> tuple:
139
  """Check the validity of the ground truth and prediction data
140
 
141
  Args:
142
  gt_data (dict): the ground truth data
143
  pred_data (dict): the prediction data
144
 
145
+ Returns:
146
+ tuple: (valid_keys, error_keys, missing_keys) - lists of valid, error, and missing prediction keys
147
+
148
  Raises:
149
  ValueError: if the ground truth or prediction data is invalid
150
  """
 
155
  if not pred_data:
156
  raise ValueError("Prediction data is empty")
157
 
158
+ # Check ground truth format (must be valid)
159
  for image_key in gt_data.keys():
160
+ check_dataset_format(gt_data, image_key, is_prediction=False)
161
 
162
+ # Check prediction format (allow errors)
163
+ valid_keys = []
164
+ error_keys = []
165
+ missing_keys = []
166
+
167
  for image_key in gt_data.keys():
168
+ pred_elem = pred_data.get(image_key)
169
+ if pred_elem is None:
170
+ missing_keys.append(image_key)
171
+ elif check_dataset_format(pred_data, image_key, is_prediction=True):
172
+ valid_keys.append(image_key)
173
+ else:
174
+ error_keys.append(image_key)
175
+
176
+ return valid_keys, error_keys, missing_keys
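A small sketch of how the relaxed validation behaves with a hypothetical prediction that failed on one page:

```
gt = {"p1.png": {"elements": [{"category": "paragraph", "content": {"text": "a"}}]},
      "p2.png": {"elements": [{"category": "paragraph", "content": {"text": "b"}}]}}
pred = {"p1.png": {"elements": [{"category": "paragraph", "content": {"text": "a"}}]},
        "p2.png": {"error": "request timed out"}}

valid_keys, error_keys, missing_keys = check_data_validity(gt, pred)
# valid_keys == ["p1.png"], error_keys == ["p2.png"], missing_keys == []
```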
177
 
178
+ def _merge_bboxes(bboxes: List[List[Dict[str, float]]]) -> List[float]:
179
+ """
180
+ Merge multiple bounding boxes into a single union bbox.
181
+
182
+ Args:
183
+ bboxes: List of bboxes in 4-corner format [{'x': x1, 'y': y1}, {'x': x2, 'y': y1}, {'x': x2, 'y': y2}, {'x': x1, 'y': y2}]
184
+
185
+ Returns:
186
+ Union bbox covering all input boxes: [x1_min, y1_min, x2_max, y2_max]
187
+ """
188
+ if not bboxes:
189
+ return [0.0, 0.0, 0.0, 0.0]
190
+ x1_min = min(bbox[0]['x'] for bbox in bboxes)
191
+ y1_min = min(bbox[1]['y'] for bbox in bboxes)
192
+ x2_max = max(bbox[2]['x'] for bbox in bboxes)
193
+ y2_max = max(bbox[3]['y'] for bbox in bboxes)
194
+
195
+ return [x1_min, y1_min, x2_max, y2_max]
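A worked example with two hypothetical corner-format boxes; the result is the axis-aligned union in `[x1, y1, x2, y2]` form:

```
box_a = [{"x": 10, "y": 20}, {"x": 110, "y": 20}, {"x": 110, "y": 80}, {"x": 10, "y": 80}]
box_b = [{"x": 30, "y": 70}, {"x": 150, "y": 70}, {"x": 150, "y": 160}, {"x": 30, "y": 160}]
_merge_bboxes([box_a, box_b])   # -> [10, 20, 150, 160]
```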
196
+
197
+ def preprocess_merged_tables(data: Dict[str, Any]) -> Dict[str, Any]:
198
+ """
199
+ Preprocess data to merge tables according to merged_tables information.
200
+
201
+ For each document with merged_tables:
202
+ 1. Remove child tables from elements list
203
+ 2. Keep the representative (first) table
204
+ 3. Replace representative table's HTML with merged_html
205
+ 4. Update representative table's BBox to union of all child bboxes
206
+
207
+ Args:
208
+ data: Dictionary of document data in OAC format
209
+
210
+ Returns:
211
+ Preprocessed data with merged tables
212
+ """
213
+ processed_data = {}
214
+
215
+ for doc_key, doc in data.items():
216
+ if not isinstance(doc, dict):
217
+ processed_data[doc_key] = doc
218
+ continue
219
+
220
+ # Deep copy to avoid modifying original
221
+ doc_copy = json.loads(json.dumps(doc))
222
+
223
+ merged_tables = doc_copy.get("merged_tables", [])
224
+ if not merged_tables:
225
+ # No merging needed
226
+ processed_data[doc_key] = doc_copy
227
+ continue
228
+
229
+ elements = doc_copy.get("elements", [])
230
+ if not elements:
231
+ processed_data[doc_key] = doc_copy
232
+ continue
233
+
234
+ # Build element "page-id" to index mapping for quick lookup
235
+ # Format: "page-id" (e.g., "1-7") -> element index
236
+ key_to_idx = {}
237
+ for idx, elem in enumerate(elements):
238
+ page = elem.get("page")
239
+ elem_id = elem.get("id")
240
+ if page is not None and elem_id is not None:
241
+ key = f"{int(page)}-{int(elem_id)}"
242
+ key_to_idx[key] = idx
243
+
244
+ # Track indices to remove (child tables that are merged)
245
+ indices_to_remove = set()
246
+
247
+ # Process each merge group
248
+ for merge_group in merged_tables:
249
+ table_ids = merge_group.get("table_ids", [])
250
+ if len(table_ids) < 2:
251
+ continue
252
+
253
+ # table_ids are strings in format "page-id" (e.g., "1-7", "2-11")
254
+ # No conversion needed, use as-is
255
+ table_ids = sorted([str(tid) for tid in table_ids])
256
+
257
+ # Get merged HTML - check both 'html' and 'table_html' keys
258
+ merged_html = merge_group.get("table_html") or merge_group.get("html")
259
+
260
+ # Find representative table (first one in the list)
261
+ representative_id = table_ids[0]
262
+ representative_idx = key_to_idx.get(representative_id)
263
+
264
+ # Collect bboxes from all tables in the merge group
265
+ bboxes_to_merge = []
266
+ for tid in table_ids:
267
+ idx = key_to_idx.get(tid)
268
+ elem = elements[idx]
269
+ bboxes_to_merge.append(elem.get("coordinates"))
270
+ if tid != representative_id:
271
+ indices_to_remove.add(idx)
272
+
273
+ # Update representative table
274
+ elements[representative_idx]["content"]["html"] = merged_html
275
+ elements[representative_idx]["coordinates"] = _merge_bboxes(bboxes_to_merge)
276
+
277
+ # Remove child tables (in reverse order to maintain indices)
278
+ new_elements = [
279
+ elem for idx, elem in enumerate(elements)
280
+ if idx not in indices_to_remove
281
+ ]
282
+ doc_copy["elements"] = new_elements
283
+
284
+ # Remove merged_tables key as it's been applied
285
+ doc_copy.pop("merged_tables", None)
286
+
287
+ processed_data[doc_key] = doc_copy
288
+
289
+ return processed_data
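Finally, a hypothetical document with a table split across two pages shows how the merge is applied: the representative table ("1-7") keeps the merged HTML and the union bbox, while the child table ("2-11") is dropped.

```
doc = {"sample.pdf": {
    "elements": [
        {"page": 1, "id": 7, "category": "table",
         "coordinates": [{"x": 0, "y": 0}, {"x": 100, "y": 0}, {"x": 100, "y": 50}, {"x": 0, "y": 50}],
         "content": {"html": "<table><tr><td>part 1</td></tr></table>", "text": "part 1"}},
        {"page": 2, "id": 11, "category": "table",
         "coordinates": [{"x": 0, "y": 60}, {"x": 100, "y": 60}, {"x": 100, "y": 140}, {"x": 0, "y": 140}],
         "content": {"html": "<table><tr><td>part 2</td></tr></table>", "text": "part 2"}},
    ],
    "merged_tables": [
        {"table_ids": ["1-7", "2-11"],
         "table_html": "<table><tr><td>part 1</td></tr><tr><td>part 2</td></tr></table>"},
    ],
}}

merged = preprocess_merged_tables(doc)
elems = merged["sample.pdf"]["elements"]
# len(elems) == 1, elems[0]["content"]["html"] is the merged HTML,
# elems[0]["coordinates"] == [0, 0, 100, 140], and "merged_tables" has been removed.
```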