davanstrien HF Staff · Claude Sonnet 4.5 (1M context) committed on
Commit 0b3448a · 1 Parent(s): bef1838

Add LightOnOCR-2-1B support (next-gen OCR with RLVR)

Key improvements over v1:
- 7.5× faster: 42.8 vs 5.71 pages/sec on H100
- +7.1% accuracy: 83.2% vs 76.1% on OlmOCR-Bench
- RLVR training eliminates repetition loops
- Simpler: single model (no vocabulary variants)

Critical fix: Removed empty text prefix from message format
to match official LightOnOCR-2 implementation.
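
A minimal sketch of the corrected message format (hypothetical `make_message` helper; `data_uri` stands in for the base64-encoded page image — the actual script builds this inside `make_ocr_message`):

```python
# Sketch of the LightOnOCR-2 message-format fix.
# v1 prepended an empty text part ({"type": "text", "text": ""}) to the content;
# LightOnOCR-2 expects the user message to contain ONLY the image.
def make_message(data_uri: str) -> list[dict]:
    return [
        {
            "role": "user",
            "content": [
                # No text prefix - just the image as a data URI
                {"type": "image_url", "image_url": {"url": data_uri}},
            ],
        }
    ]
```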

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>

Files changed (1)
  1. lighton-ocr2.py +622 -0
lighton-ocr2.py ADDED
@@ -0,0 +1,622 @@
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets",
#     "huggingface-hub[hf_transfer]",
#     "pillow",
#     "vllm",
#     "tqdm",
#     "toolz",
#     "torch",
#     "triton-kernels @ git+https://github.com/triton-lang/triton.git@v3.5.0#subdirectory=python/triton_kernels",
# ]
#
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
#
# [tool.uv]
# prerelease = "allow"
# ///

"""
Convert document images to markdown using LightOnOCR-2 with vLLM.

LightOnOCR-2 is a compact 1B multilingual OCR model optimized for production speed.
It combines a Pixtral ViT encoder with a Qwen3 language model for efficient document
parsing, and uses Reinforcement Learning with Verifiable Rewards (RLVR) for improved
quality.

NOTE: Requires vLLM nightly wheels for LightOnOCR-2 support. The first run may take
a few minutes to download and install dependencies.

Features:
- ⚡ Fast: 42.8 pages/sec on an H100 GPU (7.5× faster than v1)
- 🎯 High accuracy: 83.2 ± 0.9% on OlmOCR-Bench (+7.1% vs v1)
- 🧠 RLVR trained: eliminates repetition loops and formatting errors
- 📚 Better training: 2.5× larger dataset with cleaner annotations
- 🌍 Multilingual with European language optimization
- 📐 LaTeX formula recognition
- 📊 Table extraction (markdown format)
- 📝 Document structure preservation
- 💪 Production-ready: outperforms models 9× larger

Model: lightonai/LightOnOCR-2-1B
vLLM: requires a nightly build from the main branch
Performance: 83.2 ± 0.9% on OlmOCR-Bench
"""

import argparse
import base64
import io
import json
import logging
import os
import sys
from datetime import datetime
from typing import Any, Dict, List, Union

import torch
from datasets import load_dataset
from huggingface_hub import DatasetCard, login
from PIL import Image
from toolz import partition_all
from tqdm.auto import tqdm
from vllm import LLM, SamplingParams

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


# LightOnOCR-2 model (single variant)
MODEL = "lightonai/LightOnOCR-2-1B"


def check_cuda_availability():
    """Check that CUDA is available and exit if not."""
    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        logger.error("Please run on a machine with a CUDA-capable GPU.")
        sys.exit(1)
    logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")


def resize_image_to_target(image: Image.Image, target_size: int = 1540) -> Image.Image:
    """
    Resize an image so its longest dimension is target_size, preserving aspect ratio.

    LightOnOCR-2 was trained on images at 1540px max resolution and 200 DPI.
    """
    width, height = image.size

    # Don't upscale images that are already small enough
    if max(width, height) <= target_size:
        return image

    # Compute new dimensions, preserving aspect ratio
    if width > height:
        new_width = target_size
        new_height = int(height * (target_size / width))
    else:
        new_height = target_size
        new_width = int(width * (target_size / height))

    return image.resize((new_width, new_height), Image.Resampling.LANCZOS)


def make_ocr_message(
    image: Union[Image.Image, Dict[str, Any], str],
    resize: bool = True,
    target_size: int = 1540,
) -> List[Dict]:
    """
    Create a chat message for OCR processing.

    LightOnOCR-2 was trained with 1540px max resolution at 200 DPI for optimal results.
    Unlike v1, LightOnOCR-2 does NOT use an empty text prefix - just the image.
    """
    # Convert to a PIL Image if needed
    if isinstance(image, Image.Image):
        pil_img = image
    elif isinstance(image, dict) and "bytes" in image:
        pil_img = Image.open(io.BytesIO(image["bytes"]))
    elif isinstance(image, str):
        pil_img = Image.open(image)
    else:
        raise ValueError(f"Unsupported image type: {type(image)}")

    # Convert to RGB
    pil_img = pil_img.convert("RGB")

    # Resize to the optimal dimensions for LightOnOCR-2
    if resize:
        pil_img = resize_image_to_target(pil_img, target_size)
        logger.debug(f"Resized image to {pil_img.size}")

    # Encode as a base64 data URI
    buf = io.BytesIO()
    pil_img.save(buf, format="PNG")
    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"

    # LightOnOCR-2 expects ONLY the image in the message (no text prefix)
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
            ],
        }
    ]


def create_dataset_card(
    source_dataset: str,
    model: str,
    num_samples: int,
    processing_time: str,
    batch_size: int,
    max_model_len: int,
    max_tokens: int,
    gpu_memory_utilization: float,
    temperature: float,
    top_p: float,
    target_size: int,
    image_column: str = "image",
    split: str = "train",
) -> str:
    """Create a dataset card documenting the OCR process."""
    model_name = model.split("/")[-1]

    return f"""---
tags:
- ocr
- document-processing
- lighton-ocr-2
- markdown
- uv-script
- generated
---

# Document OCR using {model_name}

This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using LightOnOCR-2, a fast and compact 1B OCR model trained with RLVR.

## Processing Details

- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
- **Model**: [{model}](https://huggingface.co/{model})
- **Number of Samples**: {num_samples:,}
- **Processing Time**: {processing_time}
- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}

### Configuration

- **Image Column**: `{image_column}`
- **Output Column**: `markdown`
- **Dataset Split**: `{split}`
- **Batch Size**: {batch_size}
- **Target Image Size**: {target_size}px (longest dimension)
- **Max Model Length**: {max_model_len:,} tokens
- **Max Output Tokens**: {max_tokens:,}
- **Temperature**: {temperature}
- **Top P**: {top_p}
- **GPU Memory Utilization**: {gpu_memory_utilization:.1%}

## Model Information

LightOnOCR-2 is a next-generation fast, compact OCR model that excels at:

- ⚡ **Speed** - 42.8 pages/second on an H100 GPU (7.5× faster than v1)
- 🎯 **Accuracy** - 83.2 ± 0.9% on OlmOCR-Bench (+7.1% vs v1)
- 🧠 **RLVR training** - eliminates repetition loops and formatting errors
- 📚 **Better dataset** - 2.5× larger training data with cleaner annotations
- 📐 **LaTeX formulas** - mathematical notation in LaTeX format
- 📊 **Tables** - extracted and formatted as markdown
- 📝 **Document structure** - hierarchy and layout preservation
- 🌍 **Multilingual** - optimized for European languages
- 💪 **Production-ready** - outperforms models 9× larger

### Key Improvements over v1

- **7.5× faster**: 42.8 vs 5.71 pages/sec on H100
- **+7.1% accuracy**: 83.2% vs 76.1% on benchmarks
- **Better quality**: RLVR training eliminates common OCR errors
- **Cleaner output**: no repetition loops or formatting glitches
- **Simpler**: single model (no vocabulary variants)

## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: the extracted text in markdown format with LaTeX formulas
- `inference_info`: JSON list tracking all OCR models applied to this dataset

## Usage

```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("{{output_dataset_id}}", split="{split}")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
```

## Reproduction

This dataset was generated with the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) LightOnOCR-2 script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \\
    {source_dataset} \\
    <output-dataset> \\
    --image-column {image_column} \\
    --batch-size {batch_size}
```

## Performance

- **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.2f} images/second
- **Benchmark Score**: 83.2 ± 0.9% on OlmOCR-Bench
- **Training**: RLVR (Reinforcement Learning with Verifiable Rewards)

Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
"""


def main(
    input_dataset: str,
    output_dataset: str,
    image_column: str = "image",
    batch_size: int = 16,
    max_model_len: int = 8192,
    max_tokens: int = 6144,
    temperature: float = 0.2,
    top_p: float = 0.9,
    gpu_memory_utilization: float = 0.8,
    target_size: int = 1540,
    no_resize: bool = False,
    hf_token: str | None = None,
    split: str = "train",
    max_samples: int | None = None,
    private: bool = False,
    shuffle: bool = False,
    seed: int = 42,
    output_column: str = "markdown",
):
    """Process images from a HF dataset through the LightOnOCR-2 model."""

    # Check CUDA availability first
    check_cuda_availability()

    # Track processing start time
    start_time = datetime.now()

    # Enable HF_TRANSFER for faster downloads
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

    # Log in to HF if a token is provided
    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    logger.info(f"Using model: {MODEL}")

    # Load dataset
    logger.info(f"Loading dataset: {input_dataset}")
    dataset = load_dataset(input_dataset, split=split)

    # Validate image column
    if image_column not in dataset.column_names:
        raise ValueError(
            f"Column '{image_column}' not found. Available: {dataset.column_names}"
        )

    # Shuffle if requested
    if shuffle:
        logger.info(f"Shuffling dataset with seed {seed}")
        dataset = dataset.shuffle(seed=seed)

    # Limit samples if requested
    if max_samples:
        dataset = dataset.select(range(min(max_samples, len(dataset))))
        logger.info(f"Limited to {len(dataset)} samples")

    # Initialize the vLLM model
    logger.info("Initializing vLLM with LightOnOCR-2")
    logger.info("This may take a few minutes on first run...")
    llm = LLM(
        model=MODEL,
        trust_remote_code=True,
        max_model_len=max_model_len,
        gpu_memory_utilization=gpu_memory_utilization,
        limit_mm_per_prompt={"image": 1},  # One image per prompt
        enforce_eager=False,  # Allow CUDA graph capture for better performance
    )

    # LightOnOCR-2 recommended sampling parameters
    sampling_params = SamplingParams(
        temperature=temperature,
        top_p=top_p,
        max_tokens=max_tokens,
    )

    logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
    logger.info(f"Output will be written to column: {output_column}")
    if not no_resize:
        logger.info(f"Images will be resized to {target_size}px (longest dimension)")

    # Process images in batches
    all_outputs = []

    for batch_indices in tqdm(
        partition_all(batch_size, range(len(dataset))),
        total=(len(dataset) + batch_size - 1) // batch_size,
        desc="LightOnOCR-2 processing",
    ):
        batch_indices = list(batch_indices)
        batch_images = [dataset[i][image_column] for i in batch_indices]

        try:
            # Create messages for the batch
            batch_messages = [
                make_ocr_message(img, resize=not no_resize, target_size=target_size)
                for img in batch_images
            ]

            # Process with vLLM
            outputs = llm.chat(batch_messages, sampling_params)

            # Extract outputs
            for output in outputs:
                text = output.outputs[0].text.strip()
                all_outputs.append(text)

        except Exception as e:
            logger.error(f"Error processing batch: {e}")
            # Add error placeholders for the failed batch
            all_outputs.extend(["[OCR ERROR]"] * len(batch_images))

    # Calculate processing time
    processing_duration = datetime.now() - start_time
    processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"

    # Add the output column to the dataset
    logger.info(f"Adding '{output_column}' column to dataset")
    dataset = dataset.add_column(output_column, all_outputs)

    # Track inference info (for multi-model comparisons)
    inference_entry = {
        "model_id": MODEL,
        "model_name": "LightOnOCR-2",
        "column_name": output_column,
        "timestamp": datetime.now().isoformat(),
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
        "target_size": target_size if not no_resize else "original",
    }

    if "inference_info" in dataset.column_names:
        # Append to existing inference info
        logger.info("Updating existing inference_info column")

        def update_inference_info(example):
            try:
                existing_info = (
                    json.loads(example["inference_info"])
                    if example["inference_info"]
                    else []
                )
            except (json.JSONDecodeError, TypeError):
                existing_info = []

            existing_info.append(inference_entry)
            return {"inference_info": json.dumps(existing_info)}

        dataset = dataset.map(update_inference_info)
    else:
        # Create a new inference_info column
        logger.info("Creating new inference_info column")
        inference_list = [json.dumps([inference_entry])] * len(dataset)
        dataset = dataset.add_column("inference_info", inference_list)

    # Push to the Hub
    logger.info(f"Pushing to {output_dataset}")
    dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)

    # Create and push the dataset card
    logger.info("Creating dataset card")
    card_content = create_dataset_card(
        source_dataset=input_dataset,
        model=MODEL,
        num_samples=len(dataset),
        processing_time=processing_time_str,
        batch_size=batch_size,
        max_model_len=max_model_len,
        max_tokens=max_tokens,
        gpu_memory_utilization=gpu_memory_utilization,
        temperature=temperature,
        top_p=top_p,
        target_size=target_size,
        image_column=image_column,
        split=split,
    )

    card = DatasetCard(card_content)
    card.push_to_hub(output_dataset, token=HF_TOKEN)

    logger.info("✅ LightOnOCR-2 processing complete!")
    logger.info(f"Dataset available at: https://huggingface.co/datasets/{output_dataset}")
    logger.info(f"Processing time: {processing_time_str}")
    logger.info(
        f"Processing speed: {len(dataset) / processing_duration.total_seconds():.2f} images/sec"
    )


if __name__ == "__main__":
    # Show example usage if no arguments are given
    if len(sys.argv) == 1:
        print("=" * 80)
        print("LightOnOCR-2 Document Processing")
        print("=" * 80)
        print("\nNext-generation 1B OCR model with RLVR training")
        print("\nFeatures:")
        print("- ⚡ Fast processing: 42.8 pages/sec on H100 (7.5× faster than v1)")
        print("- 🎯 High accuracy: 83.2 ± 0.9% on OlmOCR-Bench (+7.1% vs v1)")
        print("- 🧠 RLVR trained: no repetition loops or formatting errors")
        print("- 📚 Better training: 2.5× larger dataset with cleaner annotations")
        print("- 🌍 Multilingual with European language optimization")
        print("- 📐 LaTeX formula recognition")
        print("- 📊 Table extraction (markdown format)")
        print("- 💪 Production-ready: outperforms models 9× larger")
        print("\nExample usage:")
        print("\n1. Basic OCR:")
        print("   uv run lighton-ocr2.py input-dataset output-dataset")
        print("\n2. Custom batch size for performance:")
        print("   uv run lighton-ocr2.py docs results --batch-size 32")
        print("\n3. Test with a small sample:")
        print("   uv run lighton-ocr2.py large-dataset test --max-samples 50 --shuffle")
        print("\n4. Original image size (no resize):")
        print("   uv run lighton-ocr2.py docs output --no-resize")
        print("\n5. Running on HF Jobs:")
        print("   hf jobs uv run --flavor l4x1 \\")
        print("     -e HF_TOKEN=$(python3 -c \"from huggingface_hub import get_token; print(get_token())\") \\")
        print("     -e HF_HUB_ENABLE_HF_TRANSFER=1 \\")
        print("     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \\")
        print("     input-dataset output-dataset --batch-size 32")
        print("\n" + "=" * 80)
        print("\nKey improvements over v1:")
        print("  - 7.5× faster processing speed")
        print("  - +7.1% accuracy on benchmarks")
        print("  - Eliminates repetition loops and formatting errors")
        print("  - Simpler: single model (no vocabulary variants)")
        print("\nFor full help, run: uv run lighton-ocr2.py --help")
        sys.exit(0)

    parser = argparse.ArgumentParser(
        description="Document OCR using LightOnOCR-2 (next-gen 1B model with RLVR)",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Key improvements over v1:
  - 7.5× faster: 42.8 vs 5.71 pages/sec on H100
  - +7.1% accuracy: 83.2% vs 76.1% on benchmarks
  - Better quality: RLVR training eliminates repetition loops
  - Cleaner output: no formatting glitches
  - Simpler: single model (no vocabulary variants)

Examples:
  # Basic text OCR
  uv run lighton-ocr2.py my-docs analyzed-docs

  # Test with random sampling
  uv run lighton-ocr2.py large-dataset test --max-samples 50 --shuffle

  # Custom batch size for GPU optimization
  uv run lighton-ocr2.py dataset output --batch-size 32 --gpu-memory-utilization 0.9
""",
    )

    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
    parser.add_argument(
        "--image-column",
        default="image",
        help="Column containing images (default: image)",
    )
    parser.add_argument(
        "--batch-size",
        type=int,
        default=16,
        help="Batch size for processing (default: 16)",
    )
    parser.add_argument(
        "--max-model-len",
        type=int,
        default=8192,
        help="Maximum model context length (default: 8192)",
    )
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=6144,
        help="Maximum tokens to generate (default: 6144, the trained sequence length)",
    )
    parser.add_argument(
        "--temperature",
        type=float,
        default=0.2,
        help="Sampling temperature (default: 0.2)",
    )
    parser.add_argument(
        "--top-p",
        type=float,
        default=0.9,
        help="Top-p sampling parameter (default: 0.9)",
    )
    parser.add_argument(
        "--gpu-memory-utilization",
        type=float,
        default=0.8,
        help="GPU memory utilization (default: 0.8)",
    )
    parser.add_argument(
        "--target-size",
        type=int,
        default=1540,
        help="Target size for the longest image dimension in pixels (default: 1540, matching training)",
    )
    parser.add_argument(
        "--no-resize",
        action="store_true",
        help="Don't resize images (use original size)",
    )
    parser.add_argument("--hf-token", help="Hugging Face API token")
    parser.add_argument(
        "--split", default="train", help="Dataset split to use (default: train)"
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        help="Maximum number of samples to process (for testing)",
    )
    parser.add_argument(
        "--private", action="store_true", help="Make the output dataset private"
    )
    parser.add_argument(
        "--shuffle", action="store_true", help="Shuffle the dataset before processing"
    )
    parser.add_argument(
        "--seed",
        type=int,
        default=42,
        help="Random seed for shuffling (default: 42)",
    )
    parser.add_argument(
        "--output-column",
        default="markdown",
        help="Column name for the output text (default: markdown)",
    )

    args = parser.parse_args()

    main(
        input_dataset=args.input_dataset,
        output_dataset=args.output_dataset,
        image_column=args.image_column,
        batch_size=args.batch_size,
        max_model_len=args.max_model_len,
        max_tokens=args.max_tokens,
        temperature=args.temperature,
        top_p=args.top_p,
        gpu_memory_utilization=args.gpu_memory_utilization,
        target_size=args.target_size,
        no_resize=args.no_resize,
        hf_token=args.hf_token,
        split=args.split,
        max_samples=args.max_samples,
        private=args.private,
        shuffle=args.shuffle,
        seed=args.seed,
        output_column=args.output_column,
    )