# OCR Scripts - Development Notes
## Active Scripts
### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)
✅ **Production Ready**
- Fully supported by vLLM
- Fast batch processing
- Tested and working on HF Jobs
### LightOnOCR-2-1B (`lighton-ocr2.py`)
✅ **Production Ready** (Fixed 2026-01-29)
**Status:** Working with vLLM nightly
**What was fixed:**
- Root cause was NOT vLLM - it was the deprecated `HF_HUB_ENABLE_HF_TRANSFER=1` env var
- The script was setting this env var but the `hf_transfer` package no longer exists
- This caused download failures that manifested as "Can't load image processor" errors
- Fix: Removed the `HF_HUB_ENABLE_HF_TRANSFER=1` setting from the script
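The same failure mode can be guarded against defensively in future scripts. A minimal sketch (the helper name `scrub_deprecated_env` is ours, not from the script):

```python
import os

def scrub_deprecated_env() -> None:
    """Drop the deprecated HF_HUB_ENABLE_HF_TRANSFER flag if a caller's shell set it.

    With the hf_transfer backend gone, leaving this set breaks downloads,
    surfacing as misleading "Can't load image processor" errors.
    """
    os.environ.pop("HF_HUB_ENABLE_HF_TRANSFER", None)
```

Calling this before any `huggingface_hub` download would make scripts robust to the variable leaking in from the job environment, not just from the script itself.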
**Test results (2026-01-29):**
- 10/10 samples processed successfully
- Clean markdown output with proper headers and paragraphs
- Output dataset: `davanstrien/lighton-ocr2-test-v4`
**Example usage:**
```bash
hf jobs uv run --flavor a100-large \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
  davanstrien/ufo-ColPali output-dataset \
  --max-samples 10 --shuffle --seed 42
```
**Model Info:**
- Model: `lightonai/LightOnOCR-2-1B`
- Architecture: Pixtral ViT encoder + Qwen3 LLM
- Training: RLVR (Reinforcement Learning with Verifiable Rewards)
- Performance: 83.2% on OlmOCR-Bench, 42.8 pages/sec on H100
### PaddleOCR-VL-1.5 (`paddleocr-vl-1.5.py`)
✅ **Production Ready** (Added 2026-01-30)
**Status:** Working with transformers
**Note:** Uses transformers backend (not vLLM) because PaddleOCR-VL only supports vLLM in server mode, which doesn't fit the single-command UV script pattern. Images are processed one at a time for stability.
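The one-image-at-a-time pattern can be sketched as below; `ocr_fn` stands in for the actual transformers generate call, and the record fields are illustrative, not the script's exact schema:

```python
from typing import Callable

def process_sequentially(images: list, ocr_fn: Callable[[object], str]) -> list[dict]:
    """Run OCR one image at a time so a single bad page can't take down the batch."""
    records = []
    for i, image in enumerate(images):
        try:
            text = ocr_fn(image)
            records.append({"index": i, "markdown": text, "error": None})
        except Exception as exc:  # isolate per-image failures instead of aborting
            records.append({"index": i, "markdown": None, "error": str(exc)})
    return records
```

The design choice here is stability over throughput: without vLLM batching there is little to gain from grouping images, and per-image error isolation keeps one corrupt sample from wasting a whole GPU job.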
**Test results (2026-01-30):**
- 10/10 samples processed successfully
- Processing time: ~50s per image on L4 GPU
- Output dataset: `davanstrien/paddleocr-vl15-final-test`
**Example usage:**
```bash
hf jobs uv run --flavor l4x1 \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
  davanstrien/ufo-ColPali output-dataset \
  --max-samples 10 --shuffle --seed 42
```
**Task modes:**
- `ocr` (default): General text extraction to markdown
- `table`: Table extraction to HTML format
- `formula`: Mathematical formula recognition to LaTeX
- `chart`: Chart and diagram analysis
- `spotting`: Text spotting with localization (uses higher resolution)
- `seal`: Seal and stamp recognition
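The mode-to-output-format mapping above (selected via the script's `--task` flag, e.g. `--task table`) can be captured in a small lookup; the dict and helper name are illustrative, not the script's internals:

```python
# Output format implied by each task mode, per the list above.
TASK_OUTPUT_FORMATS = {
    "ocr": "markdown",
    "table": "html",
    "formula": "latex",
    "chart": "text",
    "spotting": "text+boxes",
    "seal": "text",
}

def output_format(task: str) -> str:
    """Map a task mode to its expected output format; unknown modes raise."""
    if task not in TASK_OUTPUT_FORMATS:
        raise ValueError(f"unknown task mode: {task!r}")
    return TASK_OUTPUT_FORMATS[task]
```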
**Model Info:**
- Model: `PaddlePaddle/PaddleOCR-VL-1.5`
- Size: 0.9B parameters (ultra-compact)
- Performance: 94.5% SOTA on OmniDocBench v1.5
- Backend: Transformers (single image processing)
- Requires: `transformers>=5.0.0`
## Pending Development
### DeepSeek-OCR-2 (Visual Causal Flow Architecture)
**Status:** ⏳ Waiting for vLLM upstream support
**Context:**
DeepSeek-OCR-2 is the next-generation OCR model (3B parameters) with a Visual Causal Flow architecture offering improved quality. We attempted to create a UV script (`deepseek-ocr2-vllm.py`) but encountered a blocker.
**Blocker:**
vLLM does not yet support the `DeepseekOCR2ForCausalLM` architecture in the official release.
**PR to Watch:**
🔗 https://github.com/vllm-project/vllm/pull/33165
This PR adds DeepSeek-OCR-2 support but is currently:
- ⚠️ **Open** (not merged)
- Has unresolved review comments
- Pre-commit checks failing
- Issues: hardcoded parameters, device mismatch bugs, missing error handling
**What's Needed:**
1. PR #33165 needs to be reviewed, fixed, and merged
2. vLLM needs to release a version including the merge
3. Then we can add these dependencies to our script:
```python
# dependencies = [
#     "datasets>=4.0.0",
#     "huggingface-hub",
#     "pillow",
#     "vllm",
#     "tqdm",
#     "toolz",
#     "torch",
#     "addict",
#     "matplotlib",
# ]
```
**Implementation Progress:**
- ✅ Created `deepseek-ocr2-vllm.py` script
- ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
- ✅ Tested script structure on HF Jobs
- ❌ Blocked: vLLM doesn't recognize architecture
**Partial Implementation:**
The file `deepseek-ocr2-vllm.py` exists in this repo but is **not functional** until vLLM support lands. Consider it a draft.
**Testing Evidence:**
When we ran on HF Jobs, we got:
```
ValidationError: Model architectures ['DeepseekOCR2ForCausalLM'] are not supported for now.
Supported architectures: [...'DeepseekOCRForCausalLM'...]
```
**Next Steps (when PR merges):**
1. Update `deepseek-ocr2-vllm.py` dependencies to include `addict` and `matplotlib`
2. Test on HF Jobs with small dataset (10 samples)
3. Verify output quality
4. Update README.md with DeepSeek-OCR-2 section
5. Document v1 vs v2 differences
**Alternative Approaches (if urgent):**
- Create a transformers-based script (slower, no vLLM batching)
- Use DeepSeek's official repo setup (complex, not UV-script compatible)
**Model Information:**
- Model ID: `deepseek-ai/DeepSeek-OCR-2`
- Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
- GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
- Parameters: 3B
- Resolution: (0-6)×768×768 + 1×1024×1024 patches
- Key improvement: Visual Causal Flow architecture
**Resolution Modes (for v2):**
```python
RESOLUTION_MODES = {
    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},  # v2 optimized
    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
}
```
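A small accessor over that table may be handy when wiring up the CLI; the helper name and the fallback-to-`base` behavior are our suggestion, not existing script code:

```python
# Copy of the mode table above, so this sketch is self-contained.
RESOLUTION_MODES = {
    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},
    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},
}

def resolve_mode(name: str, default: str = "base") -> dict:
    """Look up a resolution mode, falling back to the v2-optimized default."""
    return RESOLUTION_MODES.get(name, RESOLUTION_MODES[default])
```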
## Other OCR Scripts
### Nanonets OCR (`nanonets-ocr.py`, `nanonets-ocr2.py`)
✅ Both versions working
### PaddleOCR-VL (`paddleocr-vl.py`)
✅ Working
---
## Future: OCR Smoke Test Dataset
**Status:** Idea (noted 2026-02-12)
Build a small curated dataset (`uv-scripts/ocr-smoke-test`?) with ~2-5 samples from diverse sources. Purpose: fast CI-style verification that scripts still work after dep updates, without downloading full datasets.
**Design goals:**
- Tiny (~20-30 images total) so download is seconds not minutes
- Covers the axes that break things: document type, image quality, language, layout complexity
- Has ground truth text where possible for quality regression checks
- All permissively licensed (CC0/CC-BY preferred)
**Candidate sources:**
| Source | What it covers | Why |
|--------|---------------|-----|
| `NationalLibraryOfScotland/medical-history-of-british-india` | Historical English, degraded scans | Has hand-corrected `text` column for comparison. CC0. Already tested with GLM-OCR. |
| `davanstrien/ufo-ColPali` | Mixed modern documents | Already used as our go-to test set. Varied layouts. |
| Something with **tables** | Structured data extraction | Tests `--task table` modes. Maybe a financial report or census page. |
| Something with **formulas/LaTeX** | Math notation | Tests `--task formula`. arXiv pages or textbook scans. |
| Something **multilingual** (CJK, Arabic, etc.) | Non-Latin scripts | GLM-OCR claims zh/ja/ko support. Good to verify. |
| Something **handwritten** | Handwriting recognition | Edge case that reveals model limits. |
**How it would work:**
```bash
# Quick smoke test for any script
uv run glm-ocr.py uv-scripts/ocr-smoke-test smoke-out --max-samples 5
# Or a dedicated test runner that checks all scripts against it
```
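The dedicated-test-runner idea could start as little more than a command builder; the script list and dataset name below are placeholders from these notes, and actually executing each command (e.g. via `subprocess.run`) is left to the runner:

```python
# Hypothetical smoke-test runner sketch: builds the `uv run` argv for each script.
SCRIPTS = ["lighton-ocr2.py", "paddleocr-vl-1.5.py", "nanonets-ocr2.py"]
SMOKE_DATASET = "uv-scripts/ocr-smoke-test"  # proposed dataset, not yet built

def smoke_command(script: str, max_samples: int = 5) -> list[str]:
    """Return the argv for one smoke run; pass to subprocess.run to execute."""
    out_repo = f"smoke-out-{script.removesuffix('.py')}"
    return ["uv", "run", script, SMOKE_DATASET, out_repo,
            "--max-samples", str(max_samples)]
```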
**Open questions:**
- Build as a proper HF dataset, or just a folder of images in the repo?
- Should we include expected output for regression testing (fragile if models change)?
- Could we add a `--smoke-test` flag to each script that auto-uses this dataset?
- Worth adding to HF Jobs scheduled runs for ongoing monitoring?
---
**Last Updated:** 2026-02-12
**Watch PRs:**
- DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165