Commit e1bde4f
Parent(s): 6cae11c

Restructure README: cleaner intro, TOC table, common options

- Punchier intro with model count (13 models, 0.9B-8B)
- Quick start example switched to GLM-OCR
- Added collapsible TOC table with all scripts sorted by model size
- Moved common options section up with --help tip
- Added GLM-OCR usage example (OCR, table, formula modes)
- Collapsed detailed per-model docs into expandable section
- Stripped most emoji from headings
- Fixed stale PaddleOCR SOTA claim (GLM-OCR now scores higher)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
README.md CHANGED

@@ -7,9 +7,9 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
-
 
-## 🚀 Quick Start
 
 Run OCR on any dataset without needing your own GPU:
 
@@ -17,25 +17,97 @@ Run OCR on any dataset without needing your own GPU:
 # Quick test with 10 samples
 hf jobs uv run --flavor l4x1 \
   --secrets HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/
   your-input-dataset your-output-dataset \
   --max-samples 10
 ```
 
 That's it! The script will:
 
-
-
-
-
 
-#
 
-
 
-
 
-
 - 🧩 **Ultra-compact** - Only 0.9B parameters
 - 📝 **OCR mode** - General text extraction to markdown
 - 📊 **Table mode** - HTML table recognition
@@ -549,70 +621,6 @@ uv run nanonets-ocr2.py documents ocr-results
 
 ```
 
-
-
-Any HuggingFace dataset containing images - documents, forms, receipts, books, handwriting.
-
-## 🎛️ Configuration Options
-
-### Common Options (All Scripts)
-
-| Option | Default | Description |
-| -------------------------- | ------------------ | --------------------------------- |
-| `--image-column` | `image` | Column containing images |
-| `--batch-size` | `32`/`16`\* | Images processed together |
-| `--max-model-len` | `8192`/`16384`\*\* | Max context length |
-| `--max-tokens` | `4096`/`8192`\*\* | Max output tokens |
-| `--gpu-memory-utilization` | `0.8` | GPU memory usage (0.0-1.0) |
-| `--split` | `train` | Dataset split to process |
-| `--max-samples` | None | Limit samples (for testing) |
-| `--private` | False | Make output dataset private |
-| `--shuffle` | False | Shuffle dataset before processing |
-| `--seed` | `42` | Random seed for shuffling |
-
-\*RolmOCR and DoTS use batch size 16
-\*\*RolmOCR uses 16384/8192
-
-### Script-Specific Options
-
-**PaddleOCR-VL-1.5**:
-
-- `--task-mode`: Task type - `ocr` (default), `table`, `formula`, `chart`, `spotting`, or `seal`
-- `--output-column`: Override default column name (default: `paddleocr_1.5_[task_mode]`)
-- SOTA 94.5% accuracy on OmniDocBench v1.5
-- Uses transformers backend (single image processing for stability)
-
-**PaddleOCR-VL**:
-
-- `--task-mode`: Task type - `ocr` (default), `table`, `formula`, or `chart`
-- `--no-smart-resize`: Disable adaptive resizing (use original image size)
-- `--output-column`: Override default column name (default: `paddleocr_[task_mode]`)
-- Ultra-compact 0.9B model - fastest initialization!
-
-**GLM-OCR**:
-
-- `--task`: Task type - `ocr` (default), `formula`, or `table`
-- `--repetition-penalty`: Repetition penalty (default: 1.1, from official SDK)
-- Near-greedy sampling by default (temperature=0.01, top_p=0.00001) matching official SDK
-- Requires vLLM nightly + transformers>=5.1.0 (handled automatically)
-
-**DeepSeek-OCR**:
-
-- `--resolution-mode`: Quality level - `tiny`, `small`, `base`, `large`, or `gundam` (default)
-- `--prompt-mode`: Task type - `document` (default), `image`, `free`, `figure`, or `describe`
-- `--prompt`: Custom OCR prompt (overrides prompt-mode)
-- `--base-size`, `--image-size`, `--crop-mode`: Override resolution mode manually
-- ⚠️ **Important for HF Jobs**: Add `-e UV_TORCH_BACKEND=auto` for proper PyTorch installation
-
-**RolmOCR**:
-
-- Output column is auto-generated from model name (e.g., `rolmocr_text`)
-- Use `--output-column` to override the default name
-
-**DoTS.ocr**:
-
-- `--prompt-mode`: Choose `ocr` (default), `layout-all`, or `layout-only`
-- `--custom-prompt`: Override with custom prompt text
-- `--output-column`: Output column name (default: `markdown`)
 
-
@@ -7,9 +7,9 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
+13 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
+## 🚀 Quick Start
 
 Run OCR on any dataset without needing your own GPU:
 
@@ -17,25 +17,97 @@ Run OCR on any dataset without needing your own GPU:
 # Quick test with 10 samples
 hf jobs uv run --flavor l4x1 \
   --secrets HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
   your-input-dataset your-output-dataset \
   --max-samples 10
 ```
 
 That's it! The script will:
 
+- Process first 10 images from your dataset
+- Add OCR results as a new `markdown` column
+- Push the results to a new dataset
+- View results at: `https://huggingface.co/datasets/[your-output-dataset]`
+
+<details><summary>All scripts at a glance (sorted by model size)</summary>
+
+| Script | Model | Size | Backend | Notes |
+|--------|-------|------|---------|-------|
+| `smoldocling-ocr.py` | [SmolDocling](https://huggingface.co/ds4sd/SmolDocling-256M-preview) | 256M | Transformers | DocTags structured output |
+| `glm-ocr.py` | [GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) | 0.9B | vLLM | 94.62% OmniDocBench V1.5 |
+| `paddleocr-vl.py` | [PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) | 0.9B | Transformers | 4 task modes (ocr/table/formula/chart) |
+| `paddleocr-vl-1.5.py` | [PaddleOCR-VL-1.5](https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5) | 0.9B | Transformers | 94.5% OmniDocBench, 6 task modes |
+| `lighton-ocr.py` | [LightOnOCR-1B](https://huggingface.co/lightonai/LightOnOCR-1B-1025) | 1B | vLLM | Fast, 3 vocab sizes |
+| `lighton-ocr2.py` | [LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) | 1B | vLLM | 7× faster than v1, RLVR trained |
+| `hunyuan-ocr.py` | [HunyuanOCR](https://huggingface.co/tencent/HunyuanOCR) | 1B | vLLM | Lightweight VLM |
+| `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
+| `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
+| `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/Tencent/DoTS.ocr-1.5) | 3B | vLLM | Updated multilingual model |
+| `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
+| `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
+| `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
+| `deepseek-ocr2-vllm.py` | [DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) | 3B | vLLM | Newer, requires nightly vLLM |
+| `olmocr2-vllm.py` | [olmOCR-2-7B](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) | 7B | vLLM | 82.4% olmOCR-Bench |
+| `rolm-ocr.py` | [RolmOCR](https://huggingface.co/reducto/RolmOCR) | 7B | vLLM | Qwen2.5-VL based, general-purpose |
+| `numarkdown-ocr.py` | [NuMarkdown-8B](https://huggingface.co/numind/NuMarkdown-8B-Thinking) | 8B | vLLM | Reasoning-based OCR |
+
+</details>
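One way to use the size column above when scripting job submission is to map each script to a hardware flavor. Only `l4x1` appears in this README, so the placeholder name for larger models below is an assumption; substitute a real flavor from the HF Jobs docs.

```shell
# suggest_flavor <script-name>: echo a job flavor based on model size.
# "bigger-gpu-flavor" is a placeholder, not a real HF Jobs flavor name.
suggest_flavor() {
  case "$1" in
    olmocr2-vllm.py|rolm-ocr.py|numarkdown-ocr.py) echo "bigger-gpu-flavor" ;;  # 7-8B models
    *) echo "l4x1" ;;                                                           # 4B and below
  esac
}

suggest_flavor glm-ocr.py   # prints l4x1
```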
+
+## Common Options
+
+All scripts accept the same core flags. Model-specific defaults (batch size, context length, temperature) are tuned per model based on model card recommendations and can be overridden.
+
+| Option | Description |
+|--------|-------------|
+| `--image-column` | Column containing images (default: `image`) |
+| `--output-column` | Output column name (default: `markdown`) |
+| `--split` | Dataset split (default: `train`) |
+| `--max-samples` | Limit number of samples (useful for testing) |
+| `--private` | Make output dataset private |
+| `--shuffle` | Shuffle dataset before processing |
+| `--seed` | Random seed for shuffling (default: `42`) |
+| `--batch-size` | Images per batch (default varies per model) |
+| `--max-model-len` | Max context length (default varies per model) |
+| `--max-tokens` | Max output tokens (default varies per model) |
+| `--gpu-memory-utilization` | GPU memory fraction (default: `0.8`) |
+| `--config` | Config name for Hub push (for benchmarking) |
+| `--create-pr` | Push as PR instead of direct commit |
+| `--verbose` | Log resolved package versions after run |
+
+Every script supports `--help` to see all available options:
+
+```bash
+uv run glm-ocr.py --help
+```
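Since all scripts share these flags, it is easy to sweep several models over the same input. A minimal sketch that only prints the job commands for review: the script names come from the TOC table above, but the dataset names and the `--config`/`--max-samples` combination are illustrative.

```shell
# Print one hf jobs command per script for a small comparison run.
# Nothing is submitted: each command is echoed so it can be reviewed first.
base="https://huggingface.co/datasets/uv-scripts/ocr/raw/main"
for script in glm-ocr.py paddleocr-vl.py nanonets-ocr.py; do
  name="${script%.py}"   # strip the extension, e.g. glm-ocr
  echo "hf jobs uv run --flavor l4x1 -s HF_TOKEN \\"
  echo "  $base/$script \\"
  echo "  my-documents my-ocr-bench --config $name --max-samples 50"
done
```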
+
+## Example: GLM-OCR
+
+[GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) (0.9B) scores 94.62% on OmniDocBench V1.5 and supports OCR, formula, and table extraction:
+
+```bash
+# Basic OCR
+hf jobs uv run --flavor l4x1 -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
+  my-documents my-ocr-output
+
+# Table extraction
+hf jobs uv run --flavor l4x1 -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
+  my-documents my-tables --task table
 
+# Test on 10 samples first
+hf jobs uv run --flavor l4x1 -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
+  my-documents my-test --max-samples 10
+```
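When the same glm-ocr.py invocation is repeated with different flags, a tiny wrapper keeps the shared boilerplate in one place. A sketch: the function name and dataset names are invented, and `echo` prints the command rather than submitting it (drop the `echo` to actually run the job).

```shell
# glm_job <input-dataset> <output-dataset> [extra flags...]
# Prints the full hf jobs command for glm-ocr.py with the shared flags filled in.
glm_job() {
  in=$1; out=$2; shift 2
  echo hf jobs uv run --flavor l4x1 -s HF_TOKEN \
    "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py" \
    "$in" "$out" "$@"
}

glm_job my-documents my-tables --task table
```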
 
+<details><summary>Detailed per-model documentation</summary>
 
+### PaddleOCR-VL-1.5 (`paddleocr-vl-1.5.py`) — 6 task modes
 
+OCR using [PaddlePaddle/PaddleOCR-VL-1.5](https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5) with 94.5% accuracy:
+
+- **94.5% on OmniDocBench v1.5** (0.9B parameters)
 - 🧩 **Ultra-compact** - Only 0.9B parameters
 - 📝 **OCR mode** - General text extraction to markdown
 - 📊 **Table mode** - HTML table recognition

@@ -549,70 +621,6 @@ uv run nanonets-ocr2.py documents ocr-results
 
 ```
 
+</details>
 
+Works with any HuggingFace dataset containing images — documents, forms, receipts, books, handwriting.