# OCR Scripts - Development Notes

## Active Scripts

### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)

**Status:** Production Ready
- Fully supported by vLLM
- Fast batch processing
- Tested and working on HF Jobs

### LightOnOCR-2-1B (`lighton-ocr2.py`)

**Status:** Production Ready; working with vLLM nightly (fixed 2026-01-29)

**What was fixed:**
- The root cause was not vLLM but the deprecated `HF_HUB_ENABLE_HF_TRANSFER=1` env var
- The script set this env var, but the `hf_transfer` package it enables is no longer available
- This caused download failures that surfaced as "Can't load image processor" errors
- Fix: removed the `HF_HUB_ENABLE_HF_TRANSFER=1` setting from the script
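
The fix itself is one line. A defensive variant (a sketch, not the script's actual code) actively clears the variable in case it leaks in from the surrounding environment:

```python
import os

# The deprecated fast-download switch points at the hf_transfer backend,
# which is no longer available; make sure it is unset before any
# huggingface_hub download starts.
os.environ.pop("HF_HUB_ENABLE_HF_TRANSFER", None)
```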

**Test results (2026-01-29):**
- 10/10 samples processed successfully
- Clean markdown output with proper headers and paragraphs
- Output dataset: `davanstrien/lighton-ocr2-test-v4`

**Example usage:**
```bash
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
    davanstrien/ufo-ColPali output-dataset \
    --max-samples 10 --shuffle --seed 42
```

**Model Info:**
- Model: `lightonai/LightOnOCR-2-1B`
- Architecture: Pixtral ViT encoder + Qwen3 LLM
- Training: RLVR (Reinforcement Learning with Verifiable Rewards)
- Performance: 83.2% on OlmOCR-Bench, 42.8 pages/sec on H100

### PaddleOCR-VL-1.5 (`paddleocr-vl-1.5.py`)

**Status:** Production Ready; working with transformers (added 2026-01-30)

**Note:** Uses transformers backend (not vLLM) because PaddleOCR-VL only supports vLLM in server mode, which doesn't fit the single-command UV script pattern. Images are processed one at a time for stability.
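
The one-image-at-a-time pattern also keeps a single bad page from killing the whole run. A minimal sketch, assuming a `run_ocr(image) -> str` callable standing in for the actual model call (hypothetical name, not the script's real API):

```python
def ocr_sequentially(images, run_ocr):
    """Run OCR on each image in turn, recording failures instead of aborting."""
    results = []
    for i, img in enumerate(images):
        try:
            results.append({"index": i, "text": run_ocr(img), "error": None})
        except Exception as exc:  # one bad page should not sink the batch
            results.append({"index": i, "text": None, "error": str(exc)})
    return results
```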

**Test results (2026-01-30):**
- 10/10 samples processed successfully
- Processing time: ~50s per image on L4 GPU
- Output dataset: `davanstrien/paddleocr-vl15-final-test`

**Example usage:**
```bash
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
    davanstrien/ufo-ColPali output-dataset \
    --max-samples 10 --shuffle --seed 42
```

**Task modes:**
- `ocr` (default): General text extraction to markdown
- `table`: Table extraction to HTML format
- `formula`: Mathematical formula recognition to LaTeX
- `chart`: Chart and diagram analysis
- `spotting`: Text spotting with localization (uses higher resolution)
- `seal`: Seal and stamp recognition
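
Exposed as a CLI flag, the modes above map naturally onto an argparse choice. A sketch (the `--task` flag name matches how these modes are referenced elsewhere in these notes; the rest is illustrative):

```python
import argparse

TASKS = ["ocr", "table", "formula", "chart", "spotting", "seal"]

def build_parser() -> argparse.ArgumentParser:
    """Minimal parser sketch for selecting a PaddleOCR-VL task mode."""
    parser = argparse.ArgumentParser(description="PaddleOCR-VL-1.5 task selector")
    parser.add_argument("--task", choices=TASKS, default="ocr",
                        help="extraction mode (default: general OCR to markdown)")
    return parser
```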

**Model Info:**
- Model: `PaddlePaddle/PaddleOCR-VL-1.5`
- Size: 0.9B parameters (ultra-compact)
- Performance: 94.5% SOTA on OmniDocBench v1.5
- Backend: Transformers (single image processing)
- Requires: `transformers>=5.0.0`

## Pending Development

### DeepSeek-OCR-2 (Visual Causal Flow Architecture)

**Status:** ⏳ Waiting for vLLM upstream support

**Context:**
DeepSeek-OCR-2 is the next-generation OCR model (3B parameters) with a Visual Causal Flow architecture that improves output quality. We attempted to create a UV script (`deepseek-ocr2-vllm.py`) but hit a blocker.

**Blocker:**
vLLM does not yet support `DeepseekOCR2ForCausalLM` architecture in the official release.

**PR to Watch:**
🔗 https://github.com/vllm-project/vllm/pull/33165

This PR adds DeepSeek-OCR-2 support but is currently:
- ⚠️ **Open** (not merged)
- Has unresolved review comments
- Pre-commit checks failing
- Issues: hardcoded parameters, device mismatch bugs, missing error handling

**What's Needed:**
1. PR #33165 needs to be reviewed, fixed, and merged
2. vLLM needs to release a version including the merge
3. Then we can add these dependencies to our script:
   ```python
   # dependencies = [
   #     "datasets>=4.0.0",
   #     "huggingface-hub",
   #     "pillow",
   #     "vllm",
   #     "tqdm",
   #     "toolz",
   #     "torch",
   #     "addict",
   #     "matplotlib",
   # ]
   ```

**Implementation Progress:**
- ✅ Created `deepseek-ocr2-vllm.py` script
- ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
- ✅ Tested script structure on HF Jobs
- ❌ Blocked: vLLM doesn't recognize architecture

**Partial Implementation:**
The file `deepseek-ocr2-vllm.py` exists in this repo but is **not functional** until vLLM support lands. Consider it a draft.

**Testing Evidence:**
When we ran on HF Jobs, we got:
```
ValidationError: Model architectures ['DeepseekOCR2ForCausalLM'] are not supported for now.
Supported architectures: [...'DeepseekOCRForCausalLM'...]
```
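
Until support lands, a cheap pre-flight check can fail fast before vLLM loads any weights. A sketch that compares a model's declared architectures against vLLM's supported list (the list here is illustrative, mirroring the error above, not exhaustive):

```python
def architectures_supported(model_archs, vllm_archs):
    """Return the subset of a model's architectures that vLLM reports as supported."""
    supported = set(vllm_archs)
    return [a for a in model_archs if a in supported]

# Illustrative values: v1's architecture is in the list, v2's is not.
SUPPORTED = ["DeepseekOCRForCausalLM"]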

**Next Steps (when PR merges):**
1. Update `deepseek-ocr2-vllm.py` dependencies to include `addict` and `matplotlib`
2. Test on HF Jobs with small dataset (10 samples)
3. Verify output quality
4. Update README.md with DeepSeek-OCR-2 section
5. Document v1 vs v2 differences

**Alternative Approaches (if urgent):**
- Create a transformers-based script (slower, no vLLM batching)
- Use DeepSeek's official repo setup (complex, not compatible with the UV-script pattern)

**Model Information:**
- Model ID: `deepseek-ai/DeepSeek-OCR-2`
- Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
- GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
- Parameters: 3B
- Resolution: (0-6)×768×768 + 1×1024×1024 patches
- Key improvement: Visual Causal Flow architecture

**Resolution Modes (for v2):**
```python
RESOLUTION_MODES = {
    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},  # v2 optimized
    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
}
```
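
A small lookup helper makes misspelled mode names fail loudly instead of falling through to a confusing model error. A hypothetical helper, not part of the draft script; it takes the modes table as an argument so it works with any mode dict:

```python
def resolution_config(modes: dict, name: str) -> dict:
    """Look up a named resolution mode from a table like RESOLUTION_MODES."""
    try:
        return modes[name]
    except KeyError:
        known = ", ".join(sorted(modes))
        raise ValueError(
            f"unknown resolution mode {name!r}; expected one of: {known}"
        ) from None
```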

## Other OCR Scripts

### Nanonets OCR (`nanonets-ocr.py`, `nanonets-ocr2.py`)
✅ Both versions working

### PaddleOCR-VL (`paddleocr-vl.py`)
✅ Working

---

## Future: OCR Smoke Test Dataset

**Status:** Idea (noted 2026-02-12)

Build a small curated dataset (`uv-scripts/ocr-smoke-test`?) with ~2-5 samples from each of several diverse sources. Purpose: fast, CI-style verification that the scripts still work after dependency updates, without downloading full datasets.

**Design goals:**
- Tiny (~20-30 images total) so download is seconds not minutes
- Covers the axes that break things: document type, image quality, language, layout complexity
- Has ground truth text where possible for quality regression checks
- All permissively licensed (CC0/CC-BY preferred)

**Candidate sources:**

| Source | What it covers | Why |
|--------|---------------|-----|
| `NationalLibraryOfScotland/medical-history-of-british-india` | Historical English, degraded scans | Has hand-corrected `text` column for comparison. CC0. Already tested with GLM-OCR. |
| `davanstrien/ufo-ColPali` | Mixed modern documents | Already used as our go-to test set. Varied layouts. |
| Something with **tables** | Structured data extraction | Tests `--task table` modes. Maybe a financial report or census page. |
| Something with **formulas/LaTeX** | Math notation | Tests `--task formula`. arXiv pages or textbook scans. |
| Something **multilingual** (CJK, Arabic, etc.) | Non-Latin scripts | GLM-OCR claims zh/ja/ko support. Good to verify. |
| Something **handwritten** | Handwriting recognition | Edge case that reveals model limits. |

**How it would work:**
```bash
# Quick smoke test for any script
uv run glm-ocr.py uv-scripts/ocr-smoke-test smoke-out --max-samples 5
# Or a dedicated test runner that checks all scripts against it
```
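
A dedicated runner could be as simple as building the same command for every script. A sketch under the assumptions above (the dataset name is still tentative, and the script list would track the repo):

```python
SCRIPTS = ["deepseek-ocr-vllm.py", "lighton-ocr2.py", "paddleocr-vl-1.5.py"]

def smoke_command(script: str, dataset: str = "uv-scripts/ocr-smoke-test",
                  samples: int = 5) -> list:
    """Build the `uv run` invocation for one script against the smoke dataset."""
    return ["uv", "run", script, dataset, "smoke-out",
            "--max-samples", str(samples)]

def all_smoke_commands() -> list:
    """One command per script; a runner would subprocess each and check exit codes."""
    return [smoke_command(s) for s in SCRIPTS]
```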

**Open questions:**
- Build as a proper HF dataset, or just a folder of images in the repo?
- Should we include expected output for regression testing (fragile if models change)?
- Could we add a `--smoke-test` flag to each script that auto-uses this dataset?
- Worth adding to HF Jobs scheduled runs for ongoing monitoring?

---

**Last Updated:** 2026-02-12
**Watch PRs:**
- DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165