davanstrien (HF Staff) committed
Commit d1b1a48 · verified · 1 Parent(s): 800963f

Add pp-doclayout.py: PP-DocLayout-L layout detection with bucket support


PaddleOCR PP-DocLayout-L for document layout detection (text/title/table/figure/formula/...). Supports HF dataset and HF bucket source/sink, incremental parquet shards with resumable runs (snapshot-backed), JP2 input. Verified end-to-end on L4: dataset->dataset, dataset->bucket, bucket->dataset, bucket->bucket, resume.

Files changed (1)
  1. pp-doclayout.py +1159 -0
pp-doclayout.py ADDED
@@ -0,0 +1,1159 @@
+ # /// script
+ # requires-python = ">=3.10"
+ # dependencies = [
+ #     "paddlepaddle-gpu>=3.0.0",
+ #     "paddleocr>=3.0.0",
+ #     "opencv-contrib-python-headless",
+ #     "datasets>=4.0.0",
+ #     "huggingface-hub>=1.6.0",
+ #     "pyarrow>=15.0",
+ #     "pillow",
+ #     "numpy",
+ #     "tqdm",
+ # ]
+ #
+ # [tool.uv]
+ # # PaddleOCR/PaddleX pull in opencv-contrib-python (full), which needs system
+ # # libGL.so.1 — not present in the slim uv-on-bookworm image used by HF Jobs.
+ # # Swap to the headless cv2 variant (same `import cv2`, no GUI deps).
+ # override-dependencies = [
+ #     "opencv-contrib-python ; python_version < '0'",
+ #     "opencv-python ; python_version < '0'",
+ # ]
+ #
+ # [[tool.uv.index]]
+ # name = "paddle"
+ # url = "https://www.paddlepaddle.org.cn/packages/stable/cu126/"
+ # explicit = true
+ #
+ # [tool.uv.sources]
+ # paddlepaddle-gpu = { index = "paddle" }
+ # ///
+
+ """
+ Detect document layout regions (text/title/table/figure/formula/...) with PP-DocLayout-L.
+
+ Runs PaddleOCR's PP-DocLayout-L (or M / S / plus-L variant) over an image source
+ and emits per-image bounding-box predictions. Unlike the OCR scripts in this repo,
+ this does NOT extract text — it only locates and classifies regions.
+
+ Source can be:
+   - HF dataset repo (default): "namespace/dataset"
+   - HF bucket of image files: "hf://buckets/namespace/bucket/optional/prefix"
+
+ Sink can be:
+   - HF dataset repo (default): "namespace/dataset" (one push at end + dataset card)
+   - HF bucket: "hf://buckets/namespace/bucket/run-name" (incremental parquet
+     shards, resumable, no git overhead)
+
+ Output schema (column `layout` is a JSON string):
+   [{"bbox": [x1, y1, x2, y2], "label": "text", "score": 0.97, "cls_id": 2}, ...]
+
+ Coordinates are in the original input-image pixel space.
+
+ Example commands:
+
+   # Dataset -> dataset (smoke on L4)
+   hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
+     davanstrien/ufo-ColPali pp-doclayout-smoke \\
+     --max-samples 3 --shuffle --seed 42 --private
+
+   # Dataset -> bucket (incremental shards, resumable)
+   hf buckets create davanstrien/pp-doclayout-scratch --exist-ok
+   hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
+     davanstrien/ufo-ColPali \\
+     hf://buckets/davanstrien/pp-doclayout-scratch/run1 \\
+     --max-samples 20 --shard-size 5
+
+   # Bucket of images -> dataset
+   hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
+     hf://buckets/davanstrien/pp-doclayout-images \\
+     pp-doclayout-from-bucket --private
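+
+ Reading bucket output back (illustrative sketch — assumes pandas is available
+ in the reading environment; any parquet reader over HfFileSystem works):
+
+   import pandas as pd
+   from huggingface_hub import HfFileSystem
+
+   fs = HfFileSystem()
+   shards = fs.glob("hf://buckets/davanstrien/pp-doclayout-scratch/run1/shard-*.parquet")
+   df = pd.concat(pd.read_parquet(fs.open(p, "rb")) for p in shards)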
+ """
73
+
74
+ import argparse
75
+ import io
76
+ import json
77
+ import logging
78
+ import os
79
+ import sys
80
+ import time
81
+ from dataclasses import dataclass
82
+ from datetime import datetime, timezone
83
+ from pathlib import Path
84
+ from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
85
+
86
+ import numpy as np
87
+ from PIL import Image
88
+ from tqdm.auto import tqdm
89
+
90
+ logging.basicConfig(level=logging.INFO)
91
+ logger = logging.getLogger(__name__)
92
+
93
+
94
+ # ---------------------------------------------------------------------------
95
+ # Constants
96
+ # ---------------------------------------------------------------------------
97
+
98
+ VALID_MODELS = [
99
+ "PP-DocLayout-L",
100
+ "PP-DocLayout-M",
101
+ "PP-DocLayout-S",
102
+ "PP-DocLayout_plus-L",
103
+ ]
104
+
105
+ MODEL_SIZES = {
106
+ "PP-DocLayout-L": "~123M params (RT-DETR-L backbone)",
107
+ "PP-DocLayout-M": "~22M params (PicoDet-M)",
108
+ "PP-DocLayout-S": "~4M params (PicoDet-S)",
109
+ "PP-DocLayout_plus-L": "~123M params, 20-class plus variant",
110
+ }
111
+
112
+ IMAGE_EXTENSIONS = {
113
+ ".jpg", ".jpeg", ".png", ".tif", ".tiff", ".webp", ".bmp", ".jp2", ".j2k",
114
+ }
115
+
116
+ BUCKET_PREFIX = "hf://buckets/"
117
+
118
+
119
+ # ---------------------------------------------------------------------------
120
+ # URL helpers
121
+ # ---------------------------------------------------------------------------
122
+
123
+
124
+ def is_bucket_url(s: str) -> bool:
125
+ return s.startswith(BUCKET_PREFIX)
126
+
127
+
128
+ def parse_bucket_url(url: str) -> Tuple[str, str]:
129
+ """Split `hf://buckets/ns/bucket/path/in/bucket` into (`ns/bucket`, `path/in/bucket`)."""
130
+ if not is_bucket_url(url):
131
+ raise ValueError(f"Not a bucket URL: {url}")
132
+ rest = url[len(BUCKET_PREFIX) :].strip("/")
133
+ parts = rest.split("/", 2)
134
+ if len(parts) < 2:
135
+ raise ValueError(
136
+ f"Bucket URL must include namespace and bucket name: {url}"
137
+ )
138
+ bucket_id = f"{parts[0]}/{parts[1]}"
139
+ prefix = parts[2] if len(parts) > 2 else ""
140
+ return bucket_id, prefix
141
+
142
+
143
+ # ---------------------------------------------------------------------------
144
+ # Image helpers
145
+ # ---------------------------------------------------------------------------
146
+
147
+
148
+ def to_pil(image: Union[Image.Image, Dict[str, Any], str, bytes]) -> Image.Image:
149
+ if isinstance(image, Image.Image):
150
+ return image.convert("RGB")
151
+ if isinstance(image, dict) and "bytes" in image:
152
+ return Image.open(io.BytesIO(image["bytes"])).convert("RGB")
153
+ if isinstance(image, (bytes, bytearray)):
154
+ return Image.open(io.BytesIO(image)).convert("RGB")
155
+ if isinstance(image, str):
156
+ return Image.open(image).convert("RGB")
157
+ raise ValueError(f"Unsupported image type: {type(image)}")
158
+
159
+
160
+ def pil_to_array(pil_img: Image.Image) -> np.ndarray:
161
+ """RGB PIL -> uint8 ndarray. PaddleOCR's predict() accepts numpy arrays directly."""
162
+ return np.asarray(pil_img, dtype=np.uint8)
163
+
164
+
165
+ # ---------------------------------------------------------------------------
166
+ # Result extraction
167
+ # ---------------------------------------------------------------------------
168
+
169
+
170
+ def extract_detections(result: Any) -> List[Dict[str, Any]]:
171
+ """Pull a clean list of detections out of a paddleocr LayoutDetection result."""
172
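+     # Payload shape this parser assumes (abridged; inferred from the reads
+     # below, not an exhaustive description of paddleocr's result object):
+     #   {"res": {"boxes": [{"cls_id": 2, "label": "text", "score": 0.98,
+     #                       "coordinate": [x1, y1, x2, y2]}, ...]}}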
+     payload = result.json
+     res = payload.get("res", payload) if isinstance(payload, dict) else {}
+     boxes = res.get("boxes", []) if isinstance(res, dict) else []
+     detections = []
+     for box in boxes:
+         coord = box.get("coordinate") or box.get("bbox") or []
+         coord = [float(x) for x in coord]
+         detections.append(
+             {
+                 "bbox": coord,
+                 "label": box.get("label"),
+                 "score": float(box.get("score", 0.0)),
+                 "cls_id": int(box.get("cls_id", -1)),
+             }
+         )
+     return detections
+
+
+ # ---------------------------------------------------------------------------
+ # Sources
+ # ---------------------------------------------------------------------------
+
+
+ @dataclass
+ class SourceItem:
+     key: str  # stable identifier per image (used for dedup/resume)
+     image: Image.Image
+     extras: Dict[str, Any]  # original row fields (only populated for dataset source)
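+     # Illustrative: a bucket-source item looks like
+     #   SourceItem(key="buckets/ns/bucket/scans/page-001.jp2", image=<PIL.Image>,
+     #              extras={"__source_path": "buckets/ns/bucket/scans/page-001.jp2"})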
+
+
+ def iter_dataset_images(
+     dataset_id: str,
+     image_column: str,
+     split: str,
+     shuffle: bool,
+     seed: int,
+     max_samples: Optional[int],
+ ):
+     """Iterate (key, PIL) pairs from an HF dataset repo.
+
+     Returns: (iterator, total, dataset_reference). The dataset reference is the
+     post-shuffle/post-select Dataset, kept around so the dataset-repo sink can
+     `add_column("layout", ...)` and preserve the original schema (especially
+     Image-type columns).
+     """
+     from datasets import load_dataset
+
+     logger.info(f"Loading dataset: {dataset_id} (split={split})")
+     ds = load_dataset(dataset_id, split=split)
+
+     if image_column not in ds.column_names:
+         raise ValueError(
+             f"Column '{image_column}' not found. Available: {ds.column_names}"
+         )
+
+     if shuffle:
+         logger.info(f"Shuffling with seed {seed}")
+         ds = ds.shuffle(seed=seed)
+     if max_samples:
+         ds = ds.select(range(min(max_samples, len(ds))))
+         logger.info(f"Limited to {len(ds)} samples")
+
+     total = len(ds)
+
+     def gen() -> Iterator[SourceItem]:
+         for i in range(total):
+             row = ds[i]
+             yield SourceItem(
+                 key=f"row-{i:08d}",
+                 image=to_pil(row[image_column]),
+                 extras={},  # original schema is preserved by the sink via the dataset ref
+             )
+
+     return gen(), total, ds
+
+
+ SOURCE_PATHS_SNAPSHOT = "_source_paths.json"
+
+
+ def _bucket_snapshot_path(output_url: str) -> Tuple[str, str]:
+     """Return (bucket_id, key) for the source-paths snapshot inside an output bucket."""
+     out_bucket_id, out_prefix = parse_bucket_url(output_url)
+     snapshot_key = (
+         f"{out_prefix}/{SOURCE_PATHS_SNAPSHOT}".lstrip("/")
+         if out_prefix
+         else SOURCE_PATHS_SNAPSHOT
+     )
+     return out_bucket_id, snapshot_key
+
+
+ def iter_bucket_images(
+     bucket_url: str,
+     shuffle: bool,
+     seed: int,
+     max_samples: Optional[int],
+     hf_token: Optional[str],
+     output_url: Optional[str] = None,
+ ) -> Tuple[Iterator[SourceItem], int]:
+     """Glob image files under a bucket prefix and stream them via HfFileSystem.
+
+     If `output_url` is a bucket, the resolved source-path list is snapshotted to
+     `<output>/_source_paths.json` on first run. Subsequent runs against the same
+     output prefix reuse that snapshot, so resume stays consistent even if the
+     source bucket grows or `--shuffle`/`--max-samples` would otherwise pick a
+     different subset on the second run.
+     """
+     from huggingface_hub import HfApi, HfFileSystem
+
+     bucket_id, prefix = parse_bucket_url(bucket_url)
+     fs = HfFileSystem(token=hf_token)
+     base = f"{BUCKET_PREFIX}{bucket_id}/{prefix}".rstrip("/")
+
+     snapshot_bucket_id: Optional[str] = None
+     snapshot_key: Optional[str] = None
+     cached_paths: Optional[List[str]] = None
+
+     if output_url and is_bucket_url(output_url):
+         snapshot_bucket_id, snapshot_key = _bucket_snapshot_path(output_url)
+         snapshot_url = f"{BUCKET_PREFIX}{snapshot_bucket_id}/{snapshot_key}"
+         try:
+             with fs.open(snapshot_url, "rb") as f:
+                 snapshot = json.load(f)
+             if snapshot.get("source_url") != bucket_url:
+                 logger.warning(
+                     f"Output prefix already has a snapshot referencing a "
+                     f"different source ({snapshot.get('source_url')!r} vs "
+                     f"{bucket_url!r}). Ignoring and re-listing."
+                 )
+             else:
+                 cached_paths = snapshot["paths"]
+                 logger.info(
+                     f"Reusing existing snapshot of {len(cached_paths)} source paths "
+                     f"(written {snapshot.get('created_at', 'unknown')})"
+                 )
+         except FileNotFoundError:
+             pass
+         except Exception as e:
+             logger.warning(f"Could not read existing snapshot ({e}); re-listing.")
+
+     if cached_paths is not None:
+         all_paths = cached_paths
+     else:
+         logger.info(f"Listing images under {base}")
+         all_paths = []
+         try:
+             for entry in fs.find(base, detail=False):
+                 ext = Path(entry).suffix.lower()
+                 if ext in IMAGE_EXTENSIONS:
+                     all_paths.append(entry)
+         except FileNotFoundError as e:
+             raise ValueError(f"Bucket prefix not found: {base}") from e
+
+         if not all_paths:
+             raise ValueError(
+                 f"No image files (any of {sorted(IMAGE_EXTENSIONS)}) under {base}"
+             )
+
+         all_paths.sort()
+         if shuffle:
+             rng = np.random.default_rng(seed)
+             rng.shuffle(all_paths)
+         if max_samples:
+             all_paths = all_paths[:max_samples]
+
+         # Persist the chosen list so resume runs see exactly this set.
+         if snapshot_bucket_id is not None and snapshot_key is not None:
+             api = HfApi(token=hf_token)
+             payload = {
+                 "source_url": bucket_url,
+                 "shuffle": shuffle,
+                 "seed": seed,
+                 "max_samples": max_samples,
+                 "created_at": datetime.now(timezone.utc).isoformat(),
+                 "paths": all_paths,
+             }
+             api.batch_bucket_files(
+                 snapshot_bucket_id,
+                 add=[(json.dumps(payload).encode(), snapshot_key)],
+                 token=hf_token,
+             )
+             logger.info(
+                 f"Wrote source-path snapshot ({len(all_paths)} paths) to "
+                 f"hf://buckets/{snapshot_bucket_id}/{snapshot_key}"
+             )
+
+     total = len(all_paths)
+     logger.info(f"Found {total} images in bucket")
+
+     def key_for(path: str) -> str:
+         # Use the full bucket path (`buckets/<id>/<rel>`) as returned by
+         # fs.find. This is stable across reruns (so resume works), and the
+         # stored value in `source_path` is fully addressable — open via
+         # HfFileSystem directly with `hf://` re-prepended.
+         return path
+
+     def gen() -> Iterator[SourceItem]:
+         for path in all_paths:
+             with fs.open(path, "rb") as f:
+                 data = f.read()
+             yield SourceItem(
+                 key=key_for(path),
+                 image=to_pil(data),
+                 extras={"__source_path": key_for(path)},
+             )
+
+     return gen(), total
+
+
+ # ---------------------------------------------------------------------------
+ # Sinks
+ # ---------------------------------------------------------------------------
+
+
+ class DatasetRepoSink:
+     """Buffer all results in memory, push once at end with dataset card + inference_info.
+
+     Two modes:
+       - `original_dataset` provided (dataset-repo source): preserve the source
+         schema (including Image-type columns) and just `add_column("layout", ...)`.
+       - `original_dataset` is None (bucket-image source): build a Dataset from
+         collected rows containing __source_path + layout.
+     """
+
+     def __init__(
+         self,
+         repo_id: str,
+         *,
+         hf_token: Optional[str],
+         private: bool,
+         config: Optional[str],
+         create_pr: bool,
+         source_id: str,
+         original_dataset=None,
+     ):
+         self.repo_id = repo_id
+         self.hf_token = hf_token
+         self.private = private
+         self.config = config
+         self.create_pr = create_pr
+         self.source_id = source_id
+         self.original_dataset = original_dataset
+         # Used when original_dataset is None: row-by-row buffer.
+         self._rows: List[Dict[str, Any]] = []
+         # Used when original_dataset is set: ordered layouts aligned with dataset rows.
+         self._layouts: List[str] = []
+
+     @property
+     def kind(self) -> str:
+         return "dataset"
+
+     def already_done(self) -> set:
+         return set()  # dataset sink does a single push, no resume
+
+     def write(self, key: str, layout: List[Dict[str, Any]], extras: Dict[str, Any]) -> None:
+         layout_json = json.dumps(layout, ensure_ascii=False)
+         if self.original_dataset is not None:
+             self._layouts.append(layout_json)
+             return
+         row = {"__source_key": key, "layout": layout_json}
+         for k, v in extras.items():
+             if isinstance(v, (str, int, float, bool)) or v is None:
+                 row[k] = v
+         self._rows.append(row)
+
+     def finalize(self, model_id: str, args_dict: Dict[str, Any]) -> None:
+         from datasets import Dataset
+
+         if self.original_dataset is not None:
+             if len(self._layouts) != len(self.original_dataset):
+                 logger.warning(
+                     f"Layout count ({len(self._layouts)}) != dataset rows "
+                     f"({len(self.original_dataset)}); padding with empty layouts."
+                 )
+                 # Pad to keep add_column happy.
+                 while len(self._layouts) < len(self.original_dataset):
+                     self._layouts.append("[]")
+             ds = self.original_dataset.add_column("layout", self._layouts)
+         else:
+             if not self._rows:
+                 logger.warning("No rows produced; nothing to push.")
+                 return
+             ds = Dataset.from_list(self._rows)
+             if "__source_key" in ds.column_names:
+                 ds = ds.rename_column("__source_key", "source_path")
+
+         inference_entry = build_inference_entry(model_id, args_dict)
+
+         if "inference_info" in ds.column_names:
+             logger.info("Updating existing inference_info column")
+
+             def _update(example):
+                 try:
+                     existing = (
+                         json.loads(example["inference_info"])
+                         if example["inference_info"]
+                         else []
+                     )
+                 except (json.JSONDecodeError, TypeError):
+                     existing = []
+                 existing.append(inference_entry)
+                 return {"inference_info": json.dumps(existing)}
+
+             ds = ds.map(_update)
+         else:
+             ds = ds.add_column(
+                 "inference_info", [json.dumps([inference_entry])] * len(ds)
+             )
+
+         logger.info(f"Pushing {len(ds)} rows to {self.repo_id}")
+         push_kwargs = {
+             "private": self.private,
+             "token": self.hf_token,
+             "max_shard_size": "500MB",
+             "create_pr": self.create_pr,
+             "commit_message": f"Add PP-DocLayout layout predictions ({len(ds)} samples)"
+             + (f" [{self.config}]" if self.config else ""),
+         }
+         if self.config:
+             push_kwargs["config_name"] = self.config
+
+         max_retries = 3
+         for attempt in range(1, max_retries + 1):
+             try:
+                 if attempt > 1:
+                     logger.warning("Disabling XET (fallback to HTTP upload)")
+                     os.environ["HF_HUB_DISABLE_XET"] = "1"
+                 ds.push_to_hub(self.repo_id, **push_kwargs)
+                 break
+             except Exception as e:
+                 logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
+                 if attempt == max_retries:
+                     logger.error("All upload attempts failed.")
+                     raise
+                 time.sleep(30 * (2 ** (attempt - 1)))
+
+         # Dataset card
+         from huggingface_hub import DatasetCard
+
+         card = DatasetCard(
+             create_dataset_card(
+                 source=self.source_id,
+                 model_name=args_dict["model_name"],
+                 num_samples=len(ds),
+                 processing_time=args_dict["processing_time"],
+                 output_column="layout",
+                 threshold=args_dict["threshold"],
+                 layout_nms=args_dict["layout_nms"],
+             )
+         )
+         card.push_to_hub(self.repo_id, token=self.hf_token)
+         logger.info(
+             f"Done: https://huggingface.co/datasets/{self.repo_id}"
+         )
+
+
+ class BucketShardSink:
+     """Write incremental parquet shards to a bucket prefix. Resumable."""
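+     # Resulting layout under the output prefix (illustrative):
+     #   <prefix>/shard-00000.parquet
+     #   <prefix>/shard-00001.parquet
+     #   <prefix>/_metadata.json        (written by finalize())
+     #   <prefix>/_source_paths.json    (only when the source is also a bucket)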
+
+     METADATA_FILE = "_metadata.json"
+     SHARD_PATTERN = "shard-{:05d}.parquet"
+
+     def __init__(
+         self,
+         bucket_url: str,
+         *,
+         hf_token: Optional[str],
+         shard_size: int,
+         include_images: bool,
+         resume: bool,
+         source_id: str,
+     ):
+         from huggingface_hub import HfApi, HfFileSystem, create_bucket
+
+         self.bucket_url = bucket_url
+         self.bucket_id, self.prefix = parse_bucket_url(bucket_url)
+         self.hf_token = hf_token
+         self.shard_size = shard_size
+         self.include_images = include_images
+         self.resume = resume
+         self.source_id = source_id
+
+         self._api = HfApi(token=hf_token)
+         self._fs = HfFileSystem(token=hf_token)
+
+         # Make sure the bucket exists. Path inside the bucket is created lazily on first write.
+         try:
+             create_bucket(self.bucket_id, exist_ok=True, token=hf_token)
+         except Exception as e:
+             # If we don't have create rights but the bucket already exists, that's fine.
+             logger.warning(f"create_bucket('{self.bucket_id}') warning: {e}")
+
+         self._buffer: List[Dict[str, Any]] = []
+         self._next_shard_idx = self._discover_next_shard_idx()
+         self._completed_keys = self._discover_completed_keys() if resume else set()
+         if self._completed_keys:
+             logger.info(
+                 f"Resume: found {len(self._completed_keys)} already-processed keys, will skip them"
+             )
+
+     @property
+     def kind(self) -> str:
+         return "bucket"
+
+     def already_done(self) -> set:
+         return self._completed_keys
+
+     # --- internal helpers ---
+
+     def _shard_path(self, idx: int) -> str:
+         return self._join(self.SHARD_PATTERN.format(idx))
+
+     def _join(self, name: str) -> str:
+         return f"{self.prefix}/{name}".lstrip("/") if self.prefix else name
+
+     def _list_existing_shards(self) -> List[str]:
+         try:
+             tree = self._api.list_bucket_tree(
+                 self.bucket_id, prefix=self.prefix or None, recursive=True
+             )
+         except Exception:
+             return []
+         shards: List[str] = []
+         for item in tree:
+             path = getattr(item, "path", None)
+             ftype = getattr(item, "type", None)
+             if not path or ftype not in (None, "file"):
+                 continue
+             base = Path(path).name
+             if base.startswith("shard-") and base.endswith(".parquet"):
+                 shards.append(path)
+         return sorted(shards)
+
+     def _discover_next_shard_idx(self) -> int:
+         shards = self._list_existing_shards()
+         max_idx = -1
+         for s in shards:
+             stem = Path(s).stem  # shard-00007
+             try:
+                 max_idx = max(max_idx, int(stem.split("-")[-1]))
+             except ValueError:
+                 continue
+         return max_idx + 1
+
+     def _discover_completed_keys(self) -> set:
+         import pyarrow.parquet as pq
+
+         keys: set = set()
+         for shard_path in self._list_existing_shards():
+             full = f"{BUCKET_PREFIX}{self.bucket_id}/{shard_path}"
+             try:
+                 with self._fs.open(full, "rb") as f:
+                     table = pq.read_table(f, columns=["__source_key"])
+                 keys.update(table.column("__source_key").to_pylist())
+             except Exception as e:
+                 logger.warning(f"Could not read keys from {shard_path}: {e}")
+         return keys
+
+     def _flush(self) -> None:
+         if not self._buffer:
+             return
+         import pyarrow as pa
+         import pyarrow.parquet as pq
+
+         # Build a stable schema. Skip the image column if not requested.
+         columns = ["__source_key", "layout"]
+         if self.include_images:
+             columns.append("__image_bytes")
+         # Carry through any extra string-coercible fields (e.g. __source_path).
+         extra_keys = sorted(
+             {k for row in self._buffer for k in row.keys() if k not in columns}
+         )
+         columns.extend(extra_keys)
+
+         table_dict = {c: [row.get(c) for row in self._buffer] for c in columns}
+         # pyarrow infers types from python objects; strings/bytes/lists handled fine.
+         table = pa.Table.from_pydict(table_dict)
+
+         buf = io.BytesIO()
+         pq.write_table(table, buf, compression="zstd")
+         data = buf.getvalue()
+
+         shard_remote = self._shard_path(self._next_shard_idx)
+         logger.info(
+             f"Writing shard {self._next_shard_idx} ({len(self._buffer)} rows, "
+             f"{len(data) / 1024 / 1024:.1f} MiB) to {shard_remote}"
+         )
+         self._api.batch_bucket_files(
+             self.bucket_id, add=[(data, shard_remote)], token=self.hf_token
+         )
+         self._next_shard_idx += 1
+         self._buffer.clear()
+
+     def write(self, key: str, layout: List[Dict[str, Any]], extras: Dict[str, Any]) -> None:
+         row: Dict[str, Any] = {
+             "__source_key": key,
+             "layout": json.dumps(layout, ensure_ascii=False),
+         }
+         if self.include_images and "__image_bytes" in extras:
+             row["__image_bytes"] = extras["__image_bytes"]
+         # Pass through string/numeric extras (skip raw PIL Image objects, which
+         # the dataset source never injects directly into extras anyway).
+         for k, v in extras.items():
+             if k in row or k == "__image_bytes":
+                 continue
+             if isinstance(v, (str, int, float, bool)) or v is None:
+                 row[k] = v
+         self._buffer.append(row)
+         if len(self._buffer) >= self.shard_size:
+             self._flush()
+
+     def finalize(self, model_id: str, args_dict: Dict[str, Any]) -> None:
+         # Flush trailing rows.
+         self._flush()
+         # Write/update the metadata file alongside the shards.
+         meta = {
+             "model_id": model_id,
+             "model_name": args_dict["model_name"],
+             "task_mode": "layout-detection",
+             "source": self.source_id,
+             "threshold": args_dict["threshold"],
+             "layout_nms": args_dict["layout_nms"],
+             "shard_size": args_dict["shard_size"],
+             "include_images": self.include_images,
+             "last_run_at": datetime.now(timezone.utc).isoformat(),
+             "processing_time": args_dict.get("processing_time"),
+         }
+         meta_bytes = json.dumps(meta, indent=2).encode("utf-8")
+         meta_path = self._join(self.METADATA_FILE)
+         self._api.batch_bucket_files(
+             self.bucket_id, add=[(meta_bytes, meta_path)], token=self.hf_token
+         )
+         logger.info(
+             f"Done: https://huggingface.co/buckets/{self.bucket_id}"
+             + (f"/{self.prefix}" if self.prefix else "")
+         )
+
+
+ # ---------------------------------------------------------------------------
+ # inference_info + dataset card
+ # ---------------------------------------------------------------------------
+
+
+ def build_inference_entry(model_id: str, args_dict: Dict[str, Any]) -> Dict[str, Any]:
+     return {
+         "model_id": "PaddlePaddle/" + args_dict["model_name"],
+         "model_name": args_dict["model_name"],
+         "model_size": MODEL_SIZES.get(args_dict["model_name"], "unknown"),
+         "task_mode": "layout-detection",
+         "column_name": "layout",
+         "timestamp": datetime.now(timezone.utc).isoformat(),
+         "threshold": args_dict["threshold"],
+         "layout_nms": args_dict["layout_nms"],
+         "backend": "paddleocr",
+     }
+
+
+ def create_dataset_card(
+     source: str,
+     model_name: str,
+     num_samples: int,
+     processing_time: str,
+     output_column: str,
+     threshold: float,
+     layout_nms: bool,
+ ) -> str:
+     """Render the dataset card markdown for the dataset-repo sink."""
+     if is_bucket_url(source):
+         source_link = f"[{source}]({source})"
+     else:
+         source_link = f"[{source}](https://huggingface.co/datasets/{source})"
+
+     return f"""---
+ tags:
+ - layout-detection
+ - document-processing
+ - paddleocr
+ - pp-doclayout
+ - uv-script
+ - generated
+ viewer: false
+ ---
+
+ # Layout detection with {model_name}
+
+ Bounding-box layout predictions for images from {source_link}, produced by
+ PaddleOCR's [{model_name}](https://huggingface.co/PaddlePaddle/{model_name}).
+
+ ## Processing details
+
+ - **Source**: {source_link}
+ - **Model**: PaddlePaddle/{model_name} ({MODEL_SIZES.get(model_name, "unknown")})
+ - **Samples**: {num_samples:,}
+ - **Processing time**: {processing_time}
+ - **Processing date**: {datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")}
+ - **Confidence threshold**: {threshold}
+ - **Layout NMS**: {"on" if layout_nms else "off"}
+ - **Output column**: `{output_column}` (JSON-encoded list of detections)
+
+ ## Schema
+
+ Each row contains the original columns plus:
+
+ - `{output_column}`: JSON string. List of detections:
+   ```json
+   [
+     {{"bbox": [x1, y1, x2, y2], "label": "text", "score": 0.97, "cls_id": 2}},
+     {{"bbox": [x1, y1, x2, y2], "label": "table", "score": 0.92, "cls_id": 5}}
+   ]
+   ```
+   Coordinates are in **original input-image pixel space** (top-left origin,
+   `[xmin, ymin, xmax, ymax]`).
+ - `inference_info`: JSON list tracking every model that has been applied to
+   this dataset (appended on each run).
+
+ ## Usage
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ ds = load_dataset("{{output_dataset_id}}", split="train")
+ detections = json.loads(ds[0]["{output_column}"])
+ for det in detections:
+     print(det["label"], det["score"], det["bbox"])
+ ```
+
+ ## Reproduction
+
+ ```bash
+ hf jobs uv run --flavor l4x1 -s HF_TOKEN \\
+     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
+     {source} <output> --model-name {model_name}
+ ```
+
+ Generated with [UV Scripts](https://huggingface.co/uv-scripts).
+ """
+
+
+ # ---------------------------------------------------------------------------
+ # Main
+ # ---------------------------------------------------------------------------
+
+
+ def resolve_device(device: str) -> str:
+     if device == "gpu":
+         try:
+             import paddle  # noqa: F401
+
+             if paddle.device.is_compiled_with_cuda() and paddle.device.cuda.device_count() > 0:
+                 logger.info(
+                     f"GPU available: {paddle.device.cuda.device_count()} device(s)"
+                 )
+                 return "gpu"
+             logger.warning("No CUDA GPU detected; falling back to CPU.")
+             return "cpu"
+         except Exception as e:
+             logger.warning(f"GPU check failed ({e}); falling back to CPU.")
+             return "cpu"
+     return device
+
+
+ def main(args: argparse.Namespace) -> None:
+     from huggingface_hub import login
+
+     start_time = datetime.now()
+     hf_token = args.hf_token or os.environ.get("HF_TOKEN")
+     if hf_token:
+         login(token=hf_token)
+
+     device = resolve_device(args.device)
+
+     # ---------- source ----------
+     original_dataset = None
+     if is_bucket_url(args.input_source):
+         src_iter, total = iter_bucket_images(
+             args.input_source,
+             shuffle=args.shuffle,
+             seed=args.seed,
+             max_samples=args.max_samples,
+             hf_token=hf_token,
+             output_url=args.output_target,
+         )
+     else:
+         src_iter, total, original_dataset = iter_dataset_images(
+             args.input_source,
+             image_column=args.image_column,
+             split=args.split,
+             shuffle=args.shuffle,
+             seed=args.seed,
+             max_samples=args.max_samples,
+         )
+
+     # ---------- sink ----------
+     if is_bucket_url(args.output_target):
+         sink: Union[BucketShardSink, DatasetRepoSink] = BucketShardSink(
+             args.output_target,
+             hf_token=hf_token,
+             shard_size=args.shard_size,
+             include_images=args.include_images,
+             resume=not args.no_resume,
+             source_id=args.input_source,
+         )
+     else:
+         sink = DatasetRepoSink(
+             args.output_target,
+             hf_token=hf_token,
+             private=args.private,
+             config=args.config,
+             create_pr=args.create_pr,
+             source_id=args.input_source,
+             original_dataset=original_dataset,
+         )
+
+     completed = sink.already_done()
+
+     # ---------- model ----------
+     if args.model_name not in VALID_MODELS:
+         raise ValueError(
+             f"Invalid model {args.model_name!r}. Choose from: {VALID_MODELS}"
+         )
+     logger.info(f"Loading PaddleOCR LayoutDetection model: {args.model_name} on {device}")
+     # PaddleX gates `import cv2` at module load time on
+     # `is_dep_available("opencv-contrib-python")`, which checks
+     # `importlib.metadata.version(...)`. We ship `opencv-contrib-python-headless`
+     # (same `cv2`, no system libGL.so.1 needed) — but that's a different
+     # distribution name, so the gate fails and `cv2` is never bound, causing
+     # NameErrors deep inside paddlex modules. Patch the metadata lookup to
+     # alias the GUI cv2 distros to the headless variant before importing
+     # paddleocr; this lets paddlex's own `import cv2` succeed naturally.
+     import importlib.metadata as _metadata
+
+     _orig_metadata_version = _metadata.version
+
+     def _patched_metadata_version(dep_name):
+         if dep_name in ("opencv-contrib-python", "opencv-python"):
+             for headless_alias in (
+                 "opencv-contrib-python-headless",
+                 "opencv-python-headless",
+             ):
+                 try:
+                     return _orig_metadata_version(headless_alias)
+                 except _metadata.PackageNotFoundError:
+                     continue
+         return _orig_metadata_version(dep_name)
+
+     _metadata.version = _patched_metadata_version
+
+     from paddleocr import LayoutDetection
+
+     model = LayoutDetection(model_name=args.model_name, device=device)
+
+     # ---------- loop ----------
+     processed = 0
+     skipped = 0
+     errors = 0
+     pbar = tqdm(src_iter, total=total, desc=f"Layout {args.model_name}")
+     for item in pbar:
+         if item.key in completed:
+             skipped += 1
+             continue
+         try:
+             arr = pil_to_array(item.image)
+             results = model.predict(
+                 arr,
+                 batch_size=args.batch_size,
+                 layout_nms=args.layout_nms,
+             )
+             if not results:
+                 detections: List[Dict[str, Any]] = []
+             else:
+                 detections = extract_detections(results[0])
+             if args.threshold and args.threshold > 0:
+                 detections = [d for d in detections if d["score"] >= args.threshold]
+         except Exception as e:
+             logger.error(f"Error on {item.key}: {e}")
+             detections = []
+             errors += 1
+
+         extras = dict(item.extras)
+         if isinstance(sink, BucketShardSink) and args.include_images:
+             buf = io.BytesIO()
+             item.image.save(buf, format="PNG")
+             extras["__image_bytes"] = buf.getvalue()
+
+         sink.write(item.key, detections, extras)
+         processed += 1
+
+     duration = datetime.now() - start_time
+     processing_time_str = f"{duration.total_seconds() / 60:.2f} min"
+     logger.info(
+         f"Processed {processed} (skipped {skipped}, errors {errors}) in {processing_time_str}"
+     )
+
+     args_dict = {
+         "model_name": args.model_name,
+         "threshold": args.threshold,
+         "layout_nms": args.layout_nms,
+         "shard_size": args.shard_size,
+         "processing_time": processing_time_str,
+     }
+     sink.finalize(model_id=f"PaddlePaddle/{args.model_name}", args_dict=args_dict)
+
+     if args.verbose:
+         import importlib.metadata
+
+         logger.info("--- Resolved package versions ---")
+         for pkg in [
+             "paddlepaddle",
+             "paddlepaddle-gpu",
+             "paddleocr",
+             "huggingface-hub",
+             "datasets",
+             "pyarrow",
+             "pillow",
+             "numpy",
+         ]:
+             try:
+                 logger.info(f"  {pkg}=={importlib.metadata.version(pkg)}")
+             except importlib.metadata.PackageNotFoundError:
+                 logger.info(f"  {pkg}: not installed")
+         logger.info("--- End versions ---")
+
+
+ # ---------------------------------------------------------------------------
+ # CLI
+ # ---------------------------------------------------------------------------
+
+
+ def _print_usage_banner() -> None:
+     print("=" * 80)
+     print("PP-DocLayout layout detection")
+     print("=" * 80)
+     print(
+         "\nDetect document layout regions (text/title/table/figure/formula/...)"
+     )
+     print("with PaddleOCR's PP-DocLayout-L (or M / S / plus-L variant).")
+     print("\nModels:")
+     for m in VALID_MODELS:
+         print(f"  {m:24s} {MODEL_SIZES.get(m, '')}")
+     print("\nSources:")
+     print("  - HF dataset repo:     namespace/dataset")
+     print("  - HF bucket of images: hf://buckets/namespace/bucket[/prefix]")
+     print("\nSinks:")
+     print("  - HF dataset repo (one push + dataset card):")
+     print("      namespace/dataset")
+     print("  - HF bucket (incremental shards, resumable):")
+     print("      hf://buckets/namespace/bucket/run-name")
+     print("\nExamples:")
+     print("\n  # Smoke test on L4 (dataset -> dataset)")
+     print("  hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\")
+     print("    davanstrien/ufo-ColPali pp-doclayout-smoke \\")
+     print("    --max-samples 3 --shuffle --seed 42 --private")
+     print("\n  # Dataset -> bucket (incremental shards)")
+     print(
+         "  hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\"
+     )
+     print("    davanstrien/ufo-ColPali \\")
+     print(
+         "    hf://buckets/davanstrien/pp-doclayout-scratch/run1 \\"
+     )
+     print("    --max-samples 20 --shard-size 5")
+     print("\n  # Bucket of images -> dataset")
+     print(
+         "  hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\"
+     )
+     print(
+         "    hf://buckets/davanstrien/pp-doclayout-images \\"
+     )
+     print("    pp-doclayout-from-bucket --private")
+     print("\nFor full help, run: uv run pp-doclayout.py --help")
+     print("=" * 80)
+
+
+ def build_parser() -> argparse.ArgumentParser:
+     p = argparse.ArgumentParser(
+         description="PP-DocLayout layout detection over an HF dataset or bucket.",
+         formatter_class=argparse.RawDescriptionHelpFormatter,
+     )
+     p.add_argument(
+         "input_source",
+         help="HF dataset id (namespace/dataset) OR hf://buckets/ns/bucket[/prefix]",
+     )
+     p.add_argument(
+         "output_target",
+         help="HF dataset id (namespace/dataset) OR hf://buckets/ns/bucket/run-name",
+     )
+     p.add_argument(
+         "--model-name",
+         default="PP-DocLayout-L",
+         choices=VALID_MODELS,
+         help="PaddleOCR layout model variant (default: PP-DocLayout-L)",
+     )
+     p.add_argument(
+         "--device",
+         default="gpu",
+         choices=["gpu", "cpu"],
+         help="Device for inference (default: gpu, falls back to cpu if CUDA missing)",
+     )
+     p.add_argument(
+         "--batch-size",
+         type=int,
+         default=1,
+         help="Per-image batch size passed to model.predict (default: 1)",
+     )
+     p.add_argument(
+         "--threshold",
+         type=float,
+         default=0.5,
+         help="Drop detections below this confidence (default: 0.5; 0 disables)",
+     )
+     p.add_argument(
+         "--layout-nms",
+         dest="layout_nms",
+         action="store_true",
+         default=True,
+         help="Enable layout NMS (default: on)",
+     )
+     p.add_argument(
+         "--no-layout-nms",
+         dest="layout_nms",
+         action="store_false",
+         help="Disable layout NMS",
+     )
+     # Dataset-source-specific
+     p.add_argument(
+         "--image-column",
+         default="image",
+         help="Column containing images (dataset-repo source only, default: image)",
+     )
+     p.add_argument(
+         "--split",
+         default="train",
+         help="Dataset split (dataset-repo source only, default: train)",
+     )
+     p.add_argument(
+         "--max-samples", type=int, help="Limit number of samples (for testing)"
+     )
+     p.add_argument(
+         "--shuffle", action="store_true", help="Shuffle source before processing"
+     )
+     p.add_argument(
+         "--seed", type=int, default=42, help="Random seed for shuffle (default: 42)"
+     )
+     # Dataset-sink-specific
+     p.add_argument(
+         "--private", action="store_true", help="Private dataset output (dataset sink only)"
+     )
+     p.add_argument(
+         "--config",
+         help="Config/subset name when pushing to Hub (dataset sink only)",
+     )
+     p.add_argument(
+         "--create-pr",
+         action="store_true",
+         help="Create PR instead of direct push (dataset sink only)",
+     )
+     # Bucket-sink-specific
+     p.add_argument(
+         "--shard-size",
+         type=int,
+         default=256,
+         help="Rows per parquet shard for bucket sink (default: 256)",
+     )
+     p.add_argument(
+         "--include-images",
+         action="store_true",
+         help="Embed source image bytes in bucket output shards (off by default)",
+     )
+     p.add_argument(
+         "--no-resume",
+         action="store_true",
+         help="Disable resume scan when writing to a bucket sink",
+     )
+     # Auth + diagnostics
+     p.add_argument("--hf-token", help="Hugging Face API token (else uses HF_TOKEN env)")
+     p.add_argument(
+         "--verbose",
+         action="store_true",
+         help="Log resolved package versions at the end",
+     )
+     return p
+
+
+ if __name__ == "__main__":
+     if len(sys.argv) == 1:
+         _print_usage_banner()
+         sys.exit(0)
+     main(build_parser().parse_args())